
AI in Research: Its Uses and Limitations

05/04/2024

This is the second blog in a two-part overview of using AI in research. To read the first blog, click here.

The term ‘artificial intelligence’ (AI) was first coined by John McCarthy at a conference at Dartmouth College in 1956. Fast forward to 2022, when OpenAI announced the groundbreaking release of ChatGPT, an online chatbot that enables users to interact with the GPT-3.5 language model.

In the past decade, the disruptive technology of AI has made it easier and faster to automate processes in many industries. With the March 2023 release of GPT-4, the platform attracted even more attention from researchers due to its enhanced reasoning abilities. This is not surprising: ChatGPT and other large language models (LLMs) have enormous potential across the entire research value chain. They have possible applications in automating research techniques – from generating a hypothesis to conducting the research itself – as well as in accessing large banks of information, streamlining peer review, searching published content, detecting plagiarism, and fact-checking data.

ChatGPT and its counterparts are here to stay. For this reason, it is crucial to understand their capabilities in the research field, as well as their limitations and potential ethical shortcomings.

1. Streamlining peer-reviewing

Traditionally a time-consuming process, peer-reviewing involves experts reviewing each other’s research manuscripts before publication. AI emerges as a natural collaborator in this task, making peer-reviewing far more efficient by automating the initial stages.
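
To make the idea of automating initial stages concrete, here is a minimal, hypothetical sketch in Python of a desk-screening checklist that could run before a manuscript reaches human reviewers. The required sections and word-count thresholds are illustrative assumptions, not part of any journal’s actual workflow.

    # Hypothetical desk-screening checklist; section names and thresholds are
    # illustrative assumptions only.
    REQUIRED_SECTIONS = ["abstract", "methods", "results", "references"]
    MIN_WORDS, MAX_WORDS = 2000, 10000

    def initial_screen(manuscript_text: str) -> list[str]:
        """Return a list of screening issues; an empty list means 'pass to reviewers'."""
        issues = []
        lowered = manuscript_text.lower()
        for section in REQUIRED_SECTIONS:
            if section not in lowered:
                issues.append(f"missing section: {section}")
        word_count = len(manuscript_text.split())
        if not MIN_WORDS <= word_count <= MAX_WORDS:
            issues.append(f"word count {word_count} outside {MIN_WORDS}-{MAX_WORDS}")
        return issues

    print(initial_screen("Abstract ... Methods ... Results ... References ..."))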

Accelerating the peer-review process ensures that groundbreaking research makes a timely entrance into public discourse and policy debate. This increases the chances that the research will inform decision-making, especially in the fast-paced policy context. 

2. Navigating the literature

Literature reviews – which synthesise the existing knowledge in a field – are another fundamental component of research. AI is transforming this process, offering assistance in the identification, analysis, and synthesis of relevant literature.

By automating parts of this process, AI helps researchers access and quickly summarise the existing body of work, making it easier to identify gaps, trends, and emerging themes. Natural Language Processing (NLP) algorithms analyse content, helping researchers identify relevant studies more swiftly.
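
As a rough illustration of this kind of relevance screening, the following Python sketch ranks a handful of abstracts against a research question using TF-IDF and cosine similarity via scikit-learn. The abstracts and query are invented for the example; a real screening pipeline would add deduplication, metadata filters, and human checks.

    # A minimal sketch of NLP-based relevance screening; the abstracts and query
    # below are invented examples, not real studies.
    from sklearn.feature_extraction.text import TfidfVectorizer
    from sklearn.metrics.pairwise import cosine_similarity

    abstracts = [
        "Machine learning methods for automated literature screening in systematic reviews.",
        "A survey of crop yields under changing rainfall patterns in East Africa.",
        "Large language models as assistants for summarising peer-reviewed research.",
    ]
    query = "Can AI help researchers screen literature for systematic reviews?"

    vectorizer = TfidfVectorizer(stop_words="english")
    doc_matrix = vectorizer.fit_transform(abstracts)
    query_vec = vectorizer.transform([query])

    # Rank abstracts by similarity to the research question
    scores = cosine_similarity(query_vec, doc_matrix).ravel()
    for score, abstract in sorted(zip(scores, abstracts), reverse=True):
        print(f"{score:.2f}  {abstract}")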

With regard to ChatGPT specifically: while it can assist in collecting preliminary insights on specific topics, and can save researchers time by synthesising and analysing relevant content, a comprehensive literature review still requires thorough inspection by an expert.
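
For instance, a first-pass synthesis of a few abstracts might look like the sketch below, which assumes the openai Python SDK (version 1 or later) and an OPENAI_API_KEY in the environment. The model name, prompt, and abstracts are placeholders, and the output would still need expert verification against the primary sources.

    # A minimal sketch of drafting a first-pass synthesis with an LLM.
    # Assumes the openai Python SDK (v1+) and OPENAI_API_KEY set in the environment;
    # the model name, prompt wording, and abstracts are illustrative placeholders.
    from openai import OpenAI

    abstracts = [
        "Study A finds that ...",
        "Study B reports that ...",
    ]

    client = OpenAI()
    prompt = (
        "Summarise the common themes and apparent gaps across these abstracts:\n\n"
        + "\n\n".join(abstracts)
    )
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # assumption: substitute whichever model you have access to
        messages=[{"role": "user", "content": prompt}],
    )
    print(response.choices[0].message.content)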

Identifying genuine research gaps and creating an original hypothesis also require a degree of judgement and analysis on the part of the user. Researchers should continue to consult primary sources and experts first, to ensure they are using these emerging technologies as responsibly as possible.

3. Deriving insights from complex databases 

Data analysis is at the heart of rigorous research, and AI enhances this process. Machine learning (ML) algorithms can navigate vast databases, and identifying patterns and correlations is one of their greatest strengths. This isn’t just about speed, however; it’s about uncovering nuanced insights that may be missed by humans.
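
As a simple, hypothetical illustration of pattern-finding on tabular research data, the Python sketch below computes a correlation table and then clusters the same observations with k-means using pandas and scikit-learn. The variables, values, and number of clusters are invented for the example.

    # A minimal sketch of exploratory pattern-finding; the columns, values, and
    # choice of two clusters are illustrative assumptions, not real survey data.
    import pandas as pd
    from sklearn.preprocessing import StandardScaler
    from sklearn.cluster import KMeans

    df = pd.DataFrame({
        "income": [1200, 800, 450, 2300, 700, 1500],
        "years_schooling": [12, 9, 6, 16, 8, 14],
        "internet_hours": [20, 5, 2, 35, 4, 25],
    })

    # Correlations surface simple pairwise relationships ...
    print(df.corr().round(2))

    # ... while clustering can reveal groups of observations that a
    # correlation table alone would hide.
    X = StandardScaler().fit_transform(df)
    df["cluster"] = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(X)
    print(df)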

However, AI relies heavily on the quality of its input data, and researchers need to be mindful of this fact: biased or incomplete datasets can lead to inaccurate insights. Additionally, it can be extremely difficult – and sometimes impossible – to know how a complex machine learning model has arrived at a particular conclusion or prediction from its input data. This is known as the ‘black box’ problem.
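
One common, if partial, way researchers probe such a black box is permutation importance: shuffle one input feature at a time and see how much the model’s accuracy drops. The sketch below uses scikit-learn on synthetic data; the dataset, model choice, and feature names are assumptions for illustration, and the technique explains influence, not the model’s internal reasoning.

    # A minimal sketch of probing a "black box" model with permutation importance.
    # The synthetic dataset and random-forest model are illustrative assumptions.
    from sklearn.datasets import make_classification
    from sklearn.ensemble import RandomForestClassifier
    from sklearn.inspection import permutation_importance
    from sklearn.model_selection import train_test_split

    X, y = make_classification(n_samples=500, n_features=6, random_state=0)
    X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

    model = RandomForestClassifier(random_state=0).fit(X_train, y_train)

    # Shuffling one feature at a time shows how much each contributes to accuracy;
    # it narrows the opacity problem but does not fully explain the model.
    result = permutation_importance(model, X_test, y_test, n_repeats=10, random_state=0)
    for i, mean_drop in enumerate(result.importances_mean):
        print(f"feature_{i}: {mean_drop:.3f}")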

4. Forming global partnerships to address the digital divide

AI holds enormous potential for countries in the global south to overcome many barriers to achieving the sustainable development goals (SDGs). But accessibility and resource disparities still pose a significant challenge – not only between researchers in the global north and south, but also between institutions and individuals. The development and adoption of AI also pose unique challenges for these countries, especially regarding internet penetration, electricity access, and concerns about the negative impacts of AI. The price tag on AI technology can be steep, and threatens to create a digital divide that exacerbates discrimination and human rights violations.

Currently, there is also a lack of frameworks and capacity for applying AI appropriately within local contexts. With only 25 out of 54 African countries having the necessary data protection legislation, developing and emerging countries risk being left behind in the use and development of AI – or becoming dependent on Western nations.

The very real ethical implications of our rapid adoption of AI are being explored, but not as fast as the tools themselves are being incorporated into the way we do business. A recent review of 800 academic journal articles and monographs concluded that AI-driven technologies have a pattern of entrenching social divides and exacerbating social inequality, particularly among historically marginalised groups. The study suggests that low- and middle-income countries may be more vulnerable to the negative social impacts of AI and less likely to benefit from the attendant gains.

Enter FAIR Forward – Artificial Intelligence for All. This German Development Cooperation initiative is working with seven partner countries – Ghana, Rwanda, Kenya, South Africa, Indonesia, Uganda, and India – to achieve a more sustainable international approach to AI. This is how FAIR Forward is harnessing these global partnerships to foster impact:

Expanding access to training data and AI technologies

Access to AI technology and training data is currently concentrated in more developed countries, contributing to the digital divide between the global north and south.

The first step is to remove entry barriers to making the most of AI technology, by providing open, unbiased and inclusive training data, models, and open-source AI applications. This will enable more collaboration and knowledge-sharing among researchers and practitioners across geographical boundaries.

Strengthening local technical know-how

FAIR Forward supports digital learning and training for the development and use of AI, and encourages cooperation with German and European research institutions.

Developing policy frameworks for ethical AI

If the aim is to align AI with our societal goals and values, then developers, researchers, and policymakers need to be equipped with the right tools to make decisions about the design, development, and deployment of AI systems.

Advocating for ethical AI, data protection, and privacy helps ensure that AI is values-based and rooted in human rights, accountability, and transparent decision-making.

Here are some other initiatives committed to bridging the digital divide in the use of AI technologies:

  • Data4Policy: A study commissioned from Technopolis Group, the Oxford Internet Institute, and CEPS that fostered rich discoveries and connections with practitioners, academics, and governments with an appetite for experimentation and public sector innovation.
  • Atingi: A digital learning platform focused on providing locally relevant learning opportunities that address critical employment and educational skills gaps in emerging markets. 
  • The BMZ digilab: An innovation lab and booster for the best digital ideas in international cooperation, by the Federal Ministry for Economic Cooperation and Development (BMZ).
  • Lacuna Fund: The first global cooperative effort that addresses the shortage of training data in emerging and developing countries. Its work includes creating representative and current data in the global south.
  • GIZ’s FAIR Forward – AI for All: A German development cooperation initiative striving for more inclusive, open, and sustainable AI. Follow their updates on X: @fair_forward
