Faster. Stronger. Better? Ethical? Five points to consider when using AI for your research.
How do you write about artificial intelligence without eye rolls and groans? Well, rather than ramming ways to “100x YOUR PRODUCTIVITY” down your throat, we thought we’d give you some insight into how we’re thinking about it at Emani, a team of (international development and risk) research and crowdsourcing geeks.
The hype is around ChatGPT, an AI chatbot that can do an impressive number of information-related, human-like things (Business Insider explains more). Then there are a ton of other, existing tools (from Wordtune to ResearchRabbit) dedicated to the specialised tasks a researcher might carry out.
But cool is not enough. We need to know whether it’s worth it for our specific business requirements, and what the pitfalls are of taking what may turn out to be foolhardy shortcuts. In the spirit of shortcuts – and a digestible blog post – we focus on a cursory look at ChatGPT.
Let’s imagine we are conducting research on behalf of a European government to better understand why people are joining ISIS. What are the key tasks, very broadly speaking, that AI can take off our hands?
ChatGPT’s limitations, such as its lack of domain expertise, human perspective, and ethical assessment, raise concerns about the reliability of its outputs. Bias is a particular issue with the tool. Its ability to spit out answers depends on the material it has been fed during its ‘training’. Inevitably, it is less articulate on subjects that have been talked about less online. Plus, it currently knows very little of what happened after 2021. So analysis of fast-moving issues like the impact of election fever on unrest in Nigeria would be particularly weak. In short, over-reliance on this technology can result in low-quality, ethically questionable outputs.
There’s an added issue with systemisation of any kind in this field: the information won’t have passed through your brain in the same way, so your understanding won’t be so deep. A decade ago, we worried that Google was destroying our memory. But do we want to remember things or find apt answers to questions that help solve real problems? The answer probably depends on who you are.
Return on investment
A bigger question for us at Emani is whether it is all worth the effort and dollars. Let’s say it takes three full days to get the hang of a new tool like this (spread into chunks of varying frustration) and you’ve got five people on your team earning a combined $2,000 per day. So $6,000 of time to learn how to use these tools.
For argument’s sake, let’s estimate subscription fees at $2,000 per year (ChatGPT is free for now, but best not to bet the house on it remaining so, plus there are other tools you probably want in your suite). That totals $8,000 for the first year.
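The arithmetic above can be sketched as a quick break-even calculation. The figures ($2,000/day team rate, three days of learning, $2,000/year in subscriptions) come straight from this post; treating the break-even point as "team-days of work saved" is our own illustrative assumption, not a scientific formula.

```python
# Back-of-envelope ROI sketch using the figures from the post.
# All numbers are illustrative assumptions, not measurements.

team_day_rate = 2_000   # combined daily cost of the five-person team ($)
learning_days = 3       # time spent getting the hang of the tools
subscriptions = 2_000   # estimated annual tool fees ($)

learning_cost = learning_days * team_day_rate    # $6,000 of time
first_year_cost = learning_cost + subscriptions  # $8,000 total

# Team-days of work the tools must save in year one to break even
break_even_days = first_year_cost / team_day_rate

print(f"First-year cost: ${first_year_cost:,}")
print(f"Break-even: {break_even_days:.0f} team-days saved")
```

On these assumptions the tools pay for themselves after saving roughly four team-days of work, which frames the "dozens of days" claim below.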
This is no scientific algorithm, but the total of $8,000 does seem paltry compared to the dozens of days that are likely to be shaved off research projects within that first year alone.