• February 10, 2023
  • Paul Clewett

Faster. Stronger. Better? Ethical? Five points to consider when using AI for your research.

How do you write about artificial intelligence without eye rolls and groans? Well, rather than ramming ways to “100x YOUR PRODUCTIVITY” down your throat, we thought we’d give you some insight into how we’re thinking about it at Emani, a team of (international development and risk) research and crowdsourcing geeks.

The hype is around ChatGPT, an AI chatbot that can do an impressive number of information-related, human-like things (Business Insider explains more). Then there are a ton of other, existing tools (from Wordtune to ResearchRabbit) dedicated to the specialised tasks a researcher might carry out.

But cool is not enough. We need to know whether it’s worth it for our specific business requirements, and what the pitfalls are of taking what may turn out to be foolhardy shortcuts. In the spirit of shortcuts – and a digestible blog post – here’s a cursory look at ChatGPT.

Mo’ help

Let’s imagine we are conducting research on behalf of a European government to better understand why people are joining ISIS. What are the key tasks, very broadly speaking, that AI can take off our hands?

What’s the point?  |  In seconds, ChatGPT can…

Justify the methods and anticipate any ethical and security risks.
→ Generate a list of possible risks.

Desk review: review the work that has been done already on the subject.
→ Give an overview of the literature (but not a comprehensive review), create a reading list, help you understand the context really fast (see image, inset), and pinpoint what others say are the gaps.

Collect original data through surveys – perhaps with people we deem most likely to join ISIS as the respondents.
→ Rephrase survey questions for relevance, awkwardness and sensitivity; estimate survey completion times; generate ideas for distribution; highlight potential biases.

Analyse the data descriptively, then interpret it to work out what it all means.
→ Summarise data sets; turn plain English into the commands and code you need for more complex analysis in Stata or Python; generate possible explanations for results.

Communicate our findings to the client and others.
→ Rewrite sections more concisely; correct grammar; flag sentences that don’t fit the tone; suggest tools for visualisation.

[Image: A conversation between the author and ChatGPT, in which ChatGPT successfully outlines the reasons why someone might want to join ISIS.]
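To make the analysis step above concrete, here is the kind of descriptive-summary code ChatGPT can draft from a plain-English request like “summarise this survey column”. This is a minimal Python sketch; the survey variable and its values are made up purely for illustration:

```python
from statistics import mean, median, stdev

# Hypothetical survey responses: respondents' self-reported age (made-up data)
ages = [19, 22, 24, 27, 31, 34, 35, 41, 44, 52]

# The sort of descriptive summary ChatGPT can generate on request
summary = {
    "n": len(ages),
    "mean": round(mean(ages), 1),
    "median": median(ages),
    "stdev": round(stdev(ages), 1),
    "min": min(ages),
    "max": max(ages),
}
print(summary)
```

The point isn’t that the code is sophisticated – it isn’t – but that a researcher can get from question to working analysis without writing it by hand.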

Mo’ problems

ChatGPT’s limitations – such as its lack of domain expertise, human perspective, and ethical assessment – raise concerns about the reliability of its outputs. Bias is a particular issue with the tool. Its ability to spit out answers depends on the material it has been fed during its ‘training’. Inevitably, it is less articulate on subjects that have been discussed less online. Plus, it currently knows very little of what happened after 2021, so analysis of fast-moving issues like the impact of election fever on unrest in Nigeria would be particularly weak. In short, over-reliance on this technology can result in low-quality, ethically questionable outputs.

There’s an added issue with systemisation of any kind in this field: the information won’t have passed through your brain in the same way, so your understanding won’t be so deep. A decade ago, we worried that Google was destroying our memory. But do we want to remember things or find apt answers to questions that help solve real problems? The answer probably depends on who you are.

Return on investment

A bigger question for us at Emani is whether it is all worth the effort and dollars. Let’s say it takes three full days to get the hang of a new tool like this (spread into chunks of varying frustration) and you’ve got five people on your team earning a combined $2,000 per day. So $6,000 of time to learn how to use these tools.

For argument’s sake, let’s estimate subscription fees at $2,000 per year (ChatGPT is free now, but best not to bet the house on it remaining so, plus there are other tools you probably want in your suite). That totals $8,000 for the first year.

This is no scientific algorithm, but the total of $8,000 does seem paltry compared to the dozens of days that are likely to be shaved off research projects within that first year alone.
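The back-of-envelope arithmetic above can be written out explicitly. The figures are taken straight from the post; the break-even line is our own illustrative addition:

```python
# Figures from the post
TEAM_DAY_RATE = 2000   # five people earning a combined $2,000 per day
LEARNING_DAYS = 3      # full days to get the hang of the tools
SUBSCRIPTIONS = 2000   # assumed annual fees across the tool suite

# First-year cost: learning time plus subscriptions
first_year_cost = LEARNING_DAYS * TEAM_DAY_RATE + SUBSCRIPTIONS  # $8,000

# Illustrative break-even: team-days the tools must save to pay for themselves
break_even_days = first_year_cost / TEAM_DAY_RATE

print(first_year_cost, break_even_days)
```

On these assumptions, the tools pay for themselves after saving just four team-days – which is why “dozens of days” saved makes the $8,000 look paltry.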

