Are you working with newcomers and curious about evidence-based practices to support your clients, but short on time? Did you know that generative artificial intelligence (GenAI) can help you begin the research needed to identify evidence-based interventions more efficiently? This blog walks you through how to use GenAI responsibly to support your research process.
Every program decision, from employment supports to mental health interventions, benefits from a clear understanding of what works and why. Evidence-based practice gives providers a reliable foundation for those decisions by combining research findings, practitioner insights, and client preferences.
GenAI can support this work by speeding up the early stages of research: identifying relevant studies, organizing information, and helping to surface patterns. But it also introduces risks—hallucinated citations, oversimplified summaries, and gaps in nuance that matter for real-world implementation.
Where GenAI Fits in Your Research Process
Below are some ways GenAI can support different stages of applied research:
| Research Stage | How GenAI Can Help* | Example Tools |
| --- | --- | --- |
| Initial Brainstorming | Identify key terms, refine search criteria, explore related concepts | ChatGPT, Perplexity, Copilot |
| Literature Review | Run an initial search to identify relevant studies, refine the search protocol | Elicit, Research Rabbit, Consensus, Google Scholar |
| Organization & Analysis | Summarize key findings | NotebookLM, Grammarly |
| Writing & Reporting | Draft findings, create visualizations, refine language | Grammarly, Canva |
These techniques focus on the early stages of research, from brainstorming through initial analysis, where GenAI can help accelerate your work, but where the risks of AI bias and hallucination (incorrect or nonsensical outputs) require vigilance.
Four Techniques for Better Results
The quality of AI outputs depends heavily on how you design your questions, as well as on the model you use and the context you provide. Prompt engineering (i.e., drafting clear and specific instructions) is a useful strategy.
To illustrate this, we’ll follow Maya, a program coordinator at a resettlement agency tasked with finding evidence-based approaches for a new employment initiative. She wants to know what programmatic strategies can improve early employment outcomes for refugees. Maya is somewhat familiar with evidence-based programming methodology and is using the PICO framework to further refine her research question. PICO is a common framework that breaks queries into Population, Intervention, Comparison, and Outcomes.
Maya’s question: Among newly arrived refugees in the U.S. (P), how do early employment programs (e.g., rapid job placement) (I), compared to standard resettlement services (e.g., longer-term career advancement) (C), influence job placement rates and time to first employment (O)?
This question will guide Maya’s methodology and search process. She’ll return to its components throughout the steps below, using them to determine what expertise she needs, narrow her search parameters, and structure her outputs. When she first enters her question into a chat model (see table above), she quickly realizes she will need to refine her approach to using GenAI to research her topic effectively.
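For teams that like to keep their PICO components in a reusable template, a minimal sketch in Python (the field names and wording are illustrative, not part of a prescribed Switchboard method) might look like this:

```python
# Illustrative only: a small template for turning PICO components into a search question.
from dataclasses import dataclass

@dataclass
class PicoQuestion:
    population: str
    intervention: str
    comparison: str
    outcomes: str

    def to_prompt(self) -> str:
        # Assemble the components into a single research question.
        return (
            f"Among {self.population}, how do {self.intervention}, "
            f"compared to {self.comparison}, influence {self.outcomes}?"
        )

maya = PicoQuestion(
    population="newly arrived refugees in the U.S.",
    intervention="early employment programs (e.g., rapid job placement)",
    comparison="standard resettlement services (e.g., longer-term career advancement)",
    outcomes="job placement rates and time to first employment",
)
print(maya.to_prompt())
```

Keeping the components in one place makes it easier to reuse them consistently across prompts, search tools, and the summary tables described later in this post.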
1. Assign AI a Role
Giving AI a specific role shapes its frame of reference. A “research librarian” prioritizes academic sources. A “program evaluator” focuses on measurable outcomes. A “policy analyst” emphasizes implementation context. Including a role with elements from your PICO question can guide the search and help identify relevant studies.
Maya’s first attempt: She types “what works for refugee employment” into ChatGPT and gets a generic response that makes unverified claims, has limited scholarly sources, and offers a generic summary. She can’t tell what’s credible or relevant to her population.
Her revision (with the improved sections in bold): “**Act as a research librarian specializing in refugee resettlement.** Identify studies on early employment interventions for newly arrived refugees in the United States.” By specifying a role aligned with her population and intervention, the results should shift toward academic sources with clearer methodological grounding, though most models are still unlikely to identify every major source.
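If your team reaches a model through an API rather than a chat window, the role can be supplied as a system message. The sketch below uses OpenAI’s Python client as one example; the model name is an assumption, and any provider’s equivalent chat interface would work similarly.

```python
# Illustrative only: passing a research-librarian role as a system message.
# Requires the openai package and an API key; the model name is an example and
# may differ from what your organization has access to.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

response = client.chat.completions.create(
    model="gpt-4o-mini",  # example model name
    messages=[
        {"role": "system",
         "content": "Act as a research librarian specializing in refugee resettlement."},
        {"role": "user",
         "content": "Identify studies on early employment interventions for "
                    "newly arrived refugees in the United States."},
    ],
)
print(response.choices[0].message.content)
```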
2. Specify Source Type and Additional Inclusion Criteria (e.g., date range, journals)
Being explicit about source requirements, such as specific peer-reviewed journals, date ranges, and verifiable links, may reduce hallucinations. You can reference your PICO elements to narrow the search: studies about your specific population, your type of intervention, and your outcomes of interest.
Maya’s problem: She finds a citation that looks perfect for her PICO question, with the right population, intervention, and specific outcomes. But when she searches for the article, it doesn’t exist. The AI fabricated it, complete with a plausible journal name and author.
Her revision: “Identify peer-reviewed studies published within the last 10 years (2015–2025) on early employment interventions for newly arrived refugees in the United States. Focus on studies examining job training programs, employer partnerships, or case management approaches. For each study, provide: 1. The full APA citation, 2. The methodology used (e.g., RCT, qualitative, longitudinal). Prioritize studies from journals in social work, migration studies, or public policy.” By adding a date range, example journals, and specific intervention types, Maya gets more targeted results. She still checks every citation, but fewer turn out to be fabricated.
3. Examine the Evidence and Specify the Audience You Want to Share the Findings With
When identifying evidence-based practices, it’s helpful to share your preliminary research findings with team members to gather input. If using GenAI to summarize or revise findings, specify your audience to adjust the language and complexity of the AI’s response. For example, academic audiences may get theoretical responses and statistical details. Practitioners may get implementation insights. Funders may get outcome summaries.
Maya’s problem: After locating verified articles from her initial GenAI-supported search, Maya screens each study as part of her evidence-based methodology to determine whether it fits her research question and then assesses the quality of the evidence. She extracts the key findings, summarizes the main interventions described, and rates the overall strength of the evidence so she can draft an evidence summary that accurately reflects what the research shows. However, she soon realizes that the summary she produced is difficult to read and not useful for practitioners. When she asks GenAI to “revise the summary,” the tool returns a response that is still filled with research jargon. She needs to communicate findings to her team, not defend a dissertation.
Her revision: Maya adds: “You are an expert in refugee resettlement program development. Using the source material provided below, create a practical summary for refugee service providers focused on program design. Describe each intervention mentioned (2–3 sentences each) and note the target population and outcomes measured. Audience: Assume readers are practitioners, not researchers—prioritize clarity and applicability over academic detail. [Source material below].” Now the summary offers a clearer explanation for the intended audience.
4. Use Clear Verbs and Formats
Use specific verbs like compare, list, summarize, or create a table to produce more useful outputs instead of rambling prose. Your PICO elements can become the structure of your request, serving as column headers or organizing categories.
Maya’s problem: She instructs the AI, “tell me about the evidence about early employment programs,” and gets a list of bullet points along with a vague descriptive paragraph. She wanted a comparison she could validate and then bring to her team to identify gaps and plan further research. To get there, Maya selects three open-access research studies that offer promising evidence-based practices that could be adapted to her program’s context and client preferences—all supported by strong evidence, including two systematic reviews and one meta-analysis. After re-reading the studies, Maya drafts a summary of each intervention, including its description, the population, the outcomes, and the strength of evidence. Next, she uses GenAI to refine and compare her summaries.
Her revision: “You are a program manager in refugee services. Create a table comparing the summaries of the three evidence-based practices provided below to help service providers evaluate intervention options to support early employment programming.
Table Structure:
| Column | Content Guidance |
| --- | --- |
| Intervention / Evidence-based practice (cite source) | Official or commonly used name of the intervention or evidence-based practice |
| Target Population | Specific refugee subgroups (e.g., recent arrivals, women, youth) |
| Core Components | 2–3 key program elements or activities |
| Outcomes Measured | Primary metrics (e.g., job placement rate, wage levels, retention) |
| Strength of Evidence | Include assigned strength of evidence |
Formatting Guidelines:
- Keep cell content concise (2–3 sentences or brief bullet points)
- Note any gaps in the source material with “Not specified”
Audience: Refugee resettlement program managers comparing interventions for potential adoption.
[Source summaries below]”
Using the PICO framework and the column headers she created, Maya organizes the information into a structured format that helps her think through which interventions might best support early employment.
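If Maya wants to keep the validated comparison in a format her team can open in a spreadsheet, she could record the same columns with a short script. This is a minimal sketch; the row shown is a placeholder illustrating the structure, not a real study finding.

```python
# Minimal sketch: recording the verified comparison columns in a CSV for team review.
# The single row below is a placeholder showing the structure, not a real study.
import csv

COLUMNS = [
    "Intervention / Evidence-based practice (cite source)",
    "Target Population",
    "Core Components",
    "Outcomes Measured",
    "Strength of Evidence",
]

rows = [
    {
        "Intervention / Evidence-based practice (cite source)": "Example intervention (Author, Year)",
        "Target Population": "Recently arrived refugee adults",
        "Core Components": "Job-readiness training; employer partnerships",
        "Outcomes Measured": "Job placement rate; time to first employment",
        "Strength of Evidence": "Not specified",
    },
]

with open("intervention_comparison.csv", "w", newline="", encoding="utf-8") as f:
    writer = csv.DictWriter(f, fieldnames=COLUMNS)
    writer.writeheader()
    writer.writerows(rows)
```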
Bringing Research to Your Team
The next step for Maya is to bring the summary of her research findings to her team to discuss their implications and develop an implementation plan. Together, they will determine how to adapt relevant interventions based on client characteristics, culture, and preferences to ensure the services are delivered effectively and appropriately.
When evaluating evidence summaries with your team to consider adaptations and implementation strategies, GenAI can help streamline administrative tasks, such as drafting agendas, brainstorming questions, or organizing ideas, but it should support, not replace, your rigorous research methodology and analysis.
A useful rule of thumb is to conduct a brief “triangulation” of information where you compare research findings with client feedback and practitioner insights before making decisions about implementation. A colleague may catch contextual issues you missed, like interventions studied in settings that don’t match yours, or cultural factors that affect applicability to your population.
The Non-Negotiable: Verify Everything
Even with careful prompting, GenAI can hallucinate sources, misrepresent findings, and reflect biases in its training data. Many researchers have cautioned that more research is needed to address these ethical challenges. Although GenAI is often assumed to save time when researching evidence-based practices, all outputs still require careful verification to ensure accuracy. For this reason, teams should plan for and build verification time into their project or research workflows, rather than treating AI-generated results as final. Before acting on any AI-generated research-related outputs:
- Confirm citations exist. Ask the tool to include citations in a specific format, such as APA or MLA. Then verify each source by checking the link or searching for the article through a library, Google Scholar, or the journal’s website.
- Check that links work. Request a direct link in your prompt; ask for a publication, website, or DOI; then verify that each one leads to a real source (a small scripted check is sketched after this list).
- Read the article. Scan the abstract to gauge relevance. If the study addresses your research question, read the full article to confirm it applies to your population, intervention, and outcomes of interest.
- Cross-check queries. Run the same prompt across multiple platforms and assess studies for relevance, rigor, and applicability. Not all evidence is created equal. (See Switchboard’s Evidence Summary Protocol for more details.)
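For teams comfortable with a small script, part of the link check can be automated. The sketch below assumes your sources include DOIs and uses the requests library; a DOI that resolves still tells you nothing about whether the citation’s details are accurate, and some publishers block automated requests, so treat any error as “check manually.”

```python
# Rough sketch: check whether DOIs from an AI-generated reference list resolve.
# A resolving DOI does not guarantee the citation is accurate; you still need
# to open and read the source. Some publishers block automated requests.
import requests

dois = [
    "10.0000/example-doi",  # placeholder only, replace with DOIs from your list
]

for doi in dois:
    url = f"https://doi.org/{doi}"
    try:
        resp = requests.head(url, allow_redirects=True, timeout=10)
        status = "resolves" if resp.status_code < 400 else f"error {resp.status_code}"
    except requests.RequestException as exc:
        status = f"request failed ({exc})"
    print(f"{doi}: {status}")
```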
All AI-generated outputs, regardless of the platform or model, should be verified before being used in research or practice. To support this process, it may be helpful to use a rubric that assesses how relevant, current, and accurate the output is, along with any legal or ethical concerns.
Privacy and Documentation
Always follow your organization’s guidance to protect client data and safeguard intellectual property. A few additional considerations can also help ensure you are applying proper data protection and privacy practices, including:
- Redact personal data. Even “de-identified” details, such as location, can reveal personally identifiable information (PII). Carefully review and remove any contextual clues before sharing text with an AI platform (a simple scripted pre-check is sketched after this list).
- Follow copyright guidelines. Use materials legally by following copyright and intellectual property guidelines, and avoid uploading copyrighted materials into AI tools.
- Avoid plagiarism. When referencing research, always read the original work, verify accuracy, and cite the source directly rather than relying solely on AI-generated summaries.
- Be transparent about AI use. Note when and how AI assisted your research process. American Psychological Association (APA) Journals have some helpful guidance on this.
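As a rough pre-check before pasting case notes or other text into an AI tool, a short script can flag obvious identifiers such as email addresses and phone numbers. The patterns below are illustrative only; they will not catch names, locations, or other contextual clues, so careful human review is still required.

```python
# Illustrative pre-check only: flags obvious identifiers (emails, phone-like numbers)
# before text is shared with an AI platform. It cannot catch names, locations, or
# other contextual clues; a careful human review is still required.
import re

PATTERNS = {
    "email": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.-]+\b"),
    "phone": re.compile(r"(?:\+?1[\s.-]?)?\(?\d{3}\)?[\s.-]?\d{3}[\s.-]?\d{4}\b"),
}

def redact(text: str) -> str:
    """Replace matched identifiers with bracketed placeholders."""
    for label, pattern in PATTERNS.items():
        text = pattern.sub(f"[{label.upper()} REDACTED]", text)
    return text

sample = "Client can be reached at client@example.org or (555) 123-4567."
print(redact(sample))
```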
Note that while GenAI can help you begin the process of developing an evidence summary, it is only a starting point. A full literature review to examine the state of the evidence requires a more rigorous, human-led process that includes structured search strategies, clear criteria for selecting studies, careful appraisal of the evidence, and thoughtful synthesis of findings. For examples of evidence summaries using a systematic research process, explore resources available in Switchboard’s Evidence Database.
Team-Based Strategies for Using AI
If AI tools become a regular part of your research process, consider developing a team-based approach to their use. A learning group or community of practice can help staff discuss safeguards, appropriate use cases, and how to manage risks like bias, privacy concerns, and inaccuracies. For a framework on evaluating organizational readiness to adopt AI solutions, see Switchboard’s Using AI in Service Delivery.
Learn More
- Using AI in Service Delivery: Switchboard’s framework for evaluating organizational readiness to adopt AI solutions, along with practical guidance.
- APA Journals Policy on Generative AI provides guidance on how to ethically use, disclose, and cite AI tools in research and writing to maintain transparency and academic integrity.
- Switchboard Archived Webinar: Research Evidence in Practice: Evidence-Based Interventions in Service Delivery
- Switchboard Blogs:
- Switchboard Information Guide: Implementation Science: Bridging the Evidence-to-Action Gap in Refugee Services
Get Support
Want help verifying research findings or planning how to implement an evidence-based practice? Submit a technical assistance request to the Switchboard Research team. We’re here to partner with you.
Important Notes:
To develop accurate information for this blog, portions of the content, such as example outputs, key questions, and draft language, were generated, edited, and tested using AI tools.
All AI-generated material should be critically reviewed and validated before being used, cited, or reproduced. These tools are meant to assist, not replace, human expertise, ethical oversight, or contextual judgment.
Switchboard does not endorse or promote the use of any specific AI platforms or tools mentioned in this blog. The examples provided are intended solely to illustrate the types of tools that exist and how they might function within different stages of an evidence-based practice workflow. Service providers should review their organizational policies, data protection guidelines, and ethical considerations before selecting or using any tool.
The IRC received competitive funding through the U.S. Department of Health and Human Services, Administration for Children and Families, Grant #90RB0053.
The project is 100% financed by federal funds. The contents of this document are solely the responsibility of the authors and do not necessarily represent the official views of the U.S. Department of Health and Human Services, Administration for Children and Families.






