An essential part of improving the availability and use of data in the countries where DataDENT works is understanding the needs and priorities of diverse national and sub-national actors engaged with the nutrition data value chain. We recently carried out multisector food and nutrition data assessments in Ethiopia and Nigeria that included key informant interviews (KIIs) with over 90 respondents per country. The KIIs surfaced important context-specific insights that are guiding actionable recommendations. However, these insights come at a cost; rigorous qualitative analysis requires substantial time and technical oversight.
DataDENT aims to develop and promote tailored methods that balance academic rigor with applied feasibility. Artificial Intelligence (AI) is already used as a powerful tool for idea generation, synthesis of existing literature, and quantitative analysis. Yet AI’s application in qualitative text analysis—where we code, analyze, and synthesize complex human ideas—is still relatively unexplored.

Use of AI in qualitative data analysis
Inspired by our work testing AI tools for quantitative composite coverage analysis, the DataDENT team in Ethiopia explored how AI tools can support more streamlined qualitative analysis.
Learning about the AI tools in ATLAS.ti
ATLAS.ti, the qualitative software used by our Ethiopia team, launched a beta version of its AI coding functionality in 2023 in collaboration with OpenAI. As we started planning our analysis, there was limited information available about how to use the ATLAS.ti AI coding tools and whether they generated reliable outputs. So DataDENT explored the ATLAS.ti beta AI functionality using a subset of 30 interview transcripts that had been translated into English from Amharic, Afan Oromo, and Somali. We tried out all four of the main AI tools in the current version of ATLAS.ti.
Coding tools (AI Coding, Intentional AI Coding): We developed a codebook in advance of starting our analysis, but none of the ATLAS.ti AI tools allow users to input a pre-existing codebook. This limits their usefulness for analyses that rely on deductive or a priori codes, and for ongoing projects with an established codebook. We found that, in its current form, Intentional AI Coding produced far too many overlapping codes to be practical, and many codes seemed incorrectly applied. AI Coding also produced many codes, but they largely made sense and were applied relatively accurately.
Chatbot tools for synthesis (Conversational AI, AI Summaries): After testing the two chatbot tools, we concluded that the Conversational AI chatbot could help explore themes but should not be relied on for definitive analysis. AI Summaries provided basic overviews of documents, but they lacked depth and consistency. While these tools show promise, careful human oversight remains essential to ensure accuracy and relevance.
How did we use AI in our full analysis?
After we better understood the tools, DataDENT used AI Coding to analyze the full set of 100 interviews from Ethiopia. We also had team members code all the interviews manually, allowing us to compare the AI-assisted and “traditional” approaches. We did not use the AI tools for synthesis of coded transcripts.
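The blog does not describe how the AI-assisted and manual coding were formally compared, but one common way to quantify agreement between two coders is Cohen’s kappa, which corrects raw agreement for chance. A minimal sketch, using entirely hypothetical segment-level labels (whether a given code was applied to each of ten transcript segments):

```python
from collections import Counter

def cohens_kappa(rater_a, rater_b):
    """Cohen's kappa: chance-corrected agreement between two raters."""
    assert len(rater_a) == len(rater_b) and rater_a
    n = len(rater_a)
    # Observed agreement: fraction of segments with identical labels.
    po = sum(a == b for a, b in zip(rater_a, rater_b)) / n
    # Expected agreement under chance, from each rater's label frequencies.
    counts_a, counts_b = Counter(rater_a), Counter(rater_b)
    pe = sum(counts_a[label] * counts_b[label] for label in counts_a) / (n * n)
    return 1.0 if pe == 1 else (po - pe) / (1 - pe)

# Hypothetical: was a given code applied ("y") or not ("n") to each segment?
human = ["y", "y", "n", "y", "n", "n", "y", "n", "y", "n"]
ai    = ["y", "y", "n", "n", "n", "n", "y", "n", "y", "y"]
print(round(cohens_kappa(human, ai), 2))  # → 0.6
```

A kappa in this range would indicate moderate agreement; values above roughly 0.8 are usually treated as strong.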
We used a four-step process to apply the AI Coding tool. First, we uploaded and organized the transcripts into document groups. Second, we used the AI Coding tool in batches. The tool generated thousands of initial codes—far more than the original human-developed codebook—requiring us to carefully review, merge, and delete codes through several iterations to arrive at a streamlined, relevant set of codes. This step highlighted that while AI can produce a broad range of codes, human expertise is still essential to interpret, consolidate, and align them with clear research questions.
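The review-and-merge step above was done by hand inside ATLAS.ti, but the core idea, collapsing near-duplicate code labels into one canonical code, can be sketched with simple string similarity. The code labels and the 0.75 threshold below are illustrative assumptions, not values from our analysis:

```python
from difflib import SequenceMatcher

def merge_similar_codes(codes, threshold=0.75):
    """Group code labels whose pairwise similarity exceeds a threshold,
    keeping the first label seen in each group as the canonical code."""
    canonical = []  # list of (kept_label, [member_labels])
    for code in codes:
        for kept, members in canonical:
            if SequenceMatcher(None, code.lower(), kept.lower()).ratio() >= threshold:
                members.append(code)
                break
        else:  # no sufficiently similar canonical code yet
            canonical.append((code, [code]))
    return {kept: members for kept, members in canonical}

# Hypothetical AI-generated code labels with near-duplicates.
raw = ["data sharing barriers", "Data-sharing barriers",
       "nutrition survey timeliness", "survey timeliness",
       "data sharing barrier"]
merged = merge_similar_codes(raw)
print(len(merged))  # → 2 canonical codes
```

A real consolidation pass would still need human review of each proposed merge, since lexically similar labels can carry different meanings, which is exactly the judgment the paragraph above describes.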
Third, the team used ATLAS.ti’s named entity recognition and text search features to identify important information mentioned in the interviews that was not tagged by AI Coding, including names of specific surveys and government offices. Finally, we conducted a detailed human review to check for accuracy, adjust quotation lengths, and ensure that the final coding reflected the nuances of the data.
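The third step used ATLAS.ti’s built-in search features, but the underlying technique amounts to pattern matching for known entity names. A minimal stand-in sketch, with illustrative patterns and segments (the survey and office names are assumptions, not taken from the interviews):

```python
import re

# Illustrative patterns for entities the AI Coding pass might miss.
ENTITY_PATTERNS = {
    "survey mention": re.compile(r"\b(DHS|demographic and health survey)\b", re.I),
    "government office": re.compile(r"\b(ministry of health|bureau)\b", re.I),
}

# Hypothetical transcript segments.
segments = [
    "The DHS data arrive too late for annual planning.",
    "Coordination with the Ministry of Health is improving.",
    "Community volunteers collect the raw tallies.",
]

# Flag (segment index, entity type) pairs for human review.
hits = [(i, label)
        for i, seg in enumerate(segments)
        for label, pattern in ENTITY_PATTERNS.items()
        if pattern.search(seg)]
print(hits)  # → [(0, 'survey mention'), (1, 'government office')]
```

Flagged segments would then go to a human reviewer, mirroring the final quality-check step described above.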
Overall, our experience demonstrated that while AI Coding can rapidly generate and apply codes, it still requires significant human judgment to deliver meaningful, high-quality analysis.
What are the pros and cons of AI coding?
Our experience shows that today’s AI coding tools, like those in ATLAS.ti, can be a valuable support for more traditional qualitative text analysis—especially when researchers need to analyze large volumes of qualitative data with limited resources. AI can help broaden the analysts’ perspective and strengthen a codebook during early analysis stages. In our experience, the AI Coding tool generated the same core codes as our team, and it surfaced additional inductive codes that we found useful for deeper analysis.
However, AI coding is not without its drawbacks. The tool produced a very large and sometimes duplicative set of codes, requiring extensive human review and consolidation—similar to what might be needed when supervising a less experienced human coder. AI’s coding errors were often more predictable than those of the human team: the tool sometimes interpreted text too literally, missed important context, or failed to apply all relevant codes. We found that AI tools perform best when the data and research questions are relatively straightforward, and when experienced researchers can invest time in careful review of outputs.
Overall, our experience underscores the complementarity of AI-generated coding and human coding to ensure robustness, quality, and reliability. AI Coding can also be a useful tool for rapid, exploratory analysis. Even though our focus was on one software platform, this experience raises broader possibilities as AI tools for qualitative analysis continue to advance. They could make analysis of qualitative data more scalable and open the door to integrating qualitative data into routine nutrition data systems and quantitative household surveys.
We encourage others to explore and experiment with these emerging tools and to share their experiences with the broader nutrition community, so that together we can find the right balance between technological efficiency and rigorous human oversight.