This week’s newsletter is longer than usual, as I wanted to cover this topic with the depth it deserves.
Last week, I introduced a series on AI and the grant funding process, focusing on the importance of developing an intentional relationship with these tools rather than using them haphazardly.
This week, we consider how to incorporate AI to support your thinking rather than interfere with it.
The term “AI” can mean different things in different contexts. For our purposes, we are talking about large language models (LLMs), a form of generative AI that produces replies based on your inputs and its training data.
LLMs are true to their name: they are models. They generate language by identifying and predicting patterns, and the quality of what they return depends heavily on the quality of what you provide in your prompt and context. They do not have independent thoughts about your work, so it is more accurate to think of them as tools that sharpen, reflect, and sometimes amplify your own ideas rather than as objective counselors or independent thinkers.
In research and funding contexts, both originality and accuracy are essential. With this framing in mind, the key question becomes: Where do AI tools enter your thinking process?
Past newsletters described the complete research arc from idea inception, Spark, to its manifestation in the real world, Realization. We will use this ontology to frame how AI can support your work along each stage of this arc.
Spark
Spark is the origin of a line of research. It is the thought, next step, or unanswered question that triggers the research arc. Spark is where ideas begin. Here, AI can help you craft and hone ideas into existence, or it can lead you down unending tangents.
How to Use AI
The most important aspect of brainstorming with AI and fostering a Spark is to provide proper context. Tell it who you are. Upload your bio or provide a link to your website. Upload your most representative papers. Describe your goals and ambitions, both conceptually and concretely. Let it get to know you. Imagine sitting down with a mentor. You would not just say, “Give me research ideas,” or, “Here is my idea, tell me what to do.” You would provide context first. Do the same with your AI tool. You must create an appropriate sounding board before bouncing ideas off of it.
For example, a faculty member might upload representative papers and describe the broader research direction they hope to build over the next five years, then ask the LLM to suggest adjacent unanswered questions or emerging tensions in the field that are consistent with that trajectory. In this case, the LLM is not setting the research agenda. It is helping surface possibilities that the researcher can then judge and refine.
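Most readers will do this in a chat window, but the same structure can be made explicit in a short script. Below is a minimal sketch using the OpenAI Python SDK, assuming you have API access; the model name and file names are illustrative placeholders rather than recommendations.

```python
# A minimal sketch of context-first brainstorming with the OpenAI Python SDK.
# Assumptions: the openai package is installed, OPENAI_API_KEY is set, and the
# model name and file paths below are illustrative placeholders.
from pathlib import Path

from openai import OpenAI

client = OpenAI()

# Gather the context you would give a mentor: a bio and a representative abstract.
bio = Path("my_bio.txt").read_text()
abstract = Path("representative_abstract.txt").read_text()

context = (
    "You are acting as a research mentor. Here is who I am:\n"
    f"{bio}\n\n"
    "Here is an abstract representative of my work:\n"
    f"{abstract}\n\n"
    "My five-year goal is to build a funded research program in this area."
)

response = client.chat.completions.create(
    model="gpt-4o",  # illustrative; use whichever model your institution provides
    messages=[
        {"role": "system", "content": context},
        {
            "role": "user",
            "content": (
                "Given this background, suggest three adjacent, unanswered "
                "research questions consistent with my trajectory, and note "
                "the main assumption behind each."
            ),
        },
    ],
)

print(response.choices[0].message.content)
```

The point is the ordering, not the tooling: context first, then the focused request. The same discipline applies word for word in a chat interface.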
Where to be Careful
Do not let AI make decisions for you because its responses sound intelligent and well composed. It may present things that appear to be good ideas, but are they technically sound? Do they resonate with you and your goals? It is programmed to sound supportive and encouraging, so make sure it is not talking you into something you would rather not do.
Container
Container is where you give shape to the Spark. It is where you decide what the project is, and what it is not. It describes the samples, methods, or data involved, and what the scope should or should not be. Container is about boundaries, structure, and clarity. AI can help you define these parameters and boundaries, or it can shift and flounder with every new input and lead you astray.
How to Use AI
At this stage, it is helpful to structure your interaction with the LLM step by step. These models often perform better when complex problems are broken into smaller, sequential parts, a practice sometimes called prompt chaining and closely related to chain-of-thought prompting. While building Container, keep each chat session focused on one idea rather than several at once. You can provide as much context and detail as you would like, but prompts should stay targeted to that one idea or stage in the process. Think of it like writing a journal article: the Introduction, Methods, Results, Discussion, and Conclusions are connected, but each serves a distinct purpose. Your chat sessions should work the same way. If you keep the conversation focused, the LLM is more likely to return sharper and more useful responses.
For example, a researcher might have a promising project idea but not yet know whether it should become one proposal, two separate efforts, or perhaps a pilot project followed by a larger effort. They may still be deciding what belongs in scope for a three-year grant, what should be set aside for later, and whether one aim is too ambitious. The LLM can help break the idea into components, compare possible scopes, and identify what would realistically be in or out of bounds for the proposed effort. In this case, the LLM is helping organize and pressure-test the structure of the idea rather than defining it for the researcher.
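For readers who like to see the mechanics, here is a minimal sketch of that kind of focused, sequential session, again assuming the OpenAI Python SDK; the system message, the step prompts, and the model name are all illustrative placeholders.

```python
# A minimal sketch of a focused, step-by-step scoping session: one idea,
# several sequential prompts, each building on the last. Assumes the openai
# package and an API key; the model name is an illustrative placeholder.
from openai import OpenAI

client = OpenAI()

history = [
    {
        "role": "system",
        "content": (
            "You are helping me scope a single research project idea. "
            "Stay on this one idea; do not introduce new directions."
        ),
    }
]

steps = [
    "Here is the idea: <one-paragraph project description>. "
    "Break it into its main components.",
    "For each component, assess whether it fits within a three-year grant.",
    "Propose one in-scope framing and one framing deferred for later.",
]

for step in steps:
    history.append({"role": "user", "content": step})
    reply = client.chat.completions.create(model="gpt-4o", messages=history)
    answer = reply.choices[0].message.content
    history.append({"role": "assistant", "content": answer})
    print(answer, "\n---")
```

Because the full history is resent at each step, every answer builds on the ones before it, which is exactly the journal-article discipline described above.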
Where to be Careful
Avoid bouncing between too many concepts and ideas at once or within a given chat session. AI will indeed respond, but, as mentioned above, its output reflects your inputs. You may have three different thoughts in your head, but provide them one at a time and give the AI a chance to respond before moving to the next. Most LLMs can handle a sequence of instructions, but if that sequence stays focused on one topic or idea, the results will be sharper and more informative.
Container is also the stage where an idea is especially vulnerable to drift at the project level. Even if each individual prompt is focused, repeated additions, side questions, and interesting detours can gradually expand the project beyond what is coherent or feasible. At this stage, the goal is not to keep generating more possibilities, but to decide what belongs in the project and what should be set aside for later.
Extension
Extension is the outward-facing phase where the work moves beyond you. It includes meeting requests, proposals, feedback from reviewers and peers, and interaction with institutional systems. It also involves alignment, such as with sponsor priorities, external standards, and timelines. AI can keep you on track and accelerate this process or bog you down with endless discussions and permutations.
How to Use AI
AI, meaning LLMs, works best in Extension when you give it clear guidelines and boundaries. If you keep asking for more revisions, more options, or more variations, it will continue generating them. In this phase, make sure to specify what it should and should not consider, such as proposal guidelines, sponsor priorities, timelines, past meeting goals, or desired deliverables. The more clearly bounded the task, the more useful the output will be.
For example, a faculty member may have a strong core idea for a project but need to decide how best to position it for different potential sponsors. They can gather abstracts of recently funded awards from publicly available sponsor databases and ask the LLM to identify recurring features: what kinds of problems have been funded recently, whether there are common themes, what level of risk appears to be supported, and whether those patterns have shifted over the last few years. That output can then inform how the researcher frames the same work differently for different sponsors. The LLM is not deciding the strategy, but it can help synthesize patterns that support more thoughtful positioning.
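If you have saved those abstracts as files, a bounded request like the sketch below keeps the model inside the lines you set. It assumes the OpenAI Python SDK and a local folder of plain-text abstracts; the folder name and model are placeholders.

```python
# A minimal sketch of a bounded synthesis task: publicly available award
# abstracts go in, and the prompt states exactly what the model may consider.
# Assumes abstracts were saved locally as .txt files; model name is illustrative.
from pathlib import Path

from openai import OpenAI

client = OpenAI()

abstracts = [p.read_text() for p in sorted(Path("award_abstracts").glob("*.txt"))]

prompt = (
    "Below are abstracts of recently funded awards from one sponsor.\n"
    "Consider ONLY these abstracts; do not speculate beyond them.\n"
    "Report: (1) recurring problem types, (2) common themes, (3) the level "
    "of risk that appears to be supported, and (4) any shift over time.\n\n"
    + "\n\n---\n\n".join(abstracts)
)

response = client.chat.completions.create(
    model="gpt-4o",  # illustrative placeholder
    messages=[{"role": "user", "content": prompt}],
)
print(response.choices[0].message.content)
```

Note how the boundaries live in the prompt itself: the model is told what to read, what to report, and what not to do.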
Where to be Careful
Like a toddler in a grocery store given free rein, AI will keep wandering unless you set boundaries. It is your job to keep it reined in. If you and the AI are generating a meeting agenda and it is too long, tell it to make it shorter. If you ask it for a list of research payoffs, tell it to align them with sponsor needs.
At the same time, in the example above, do not let patterns from prior awards dictate your intellectual direction. Those patterns can inform your strategy, but they should not push you into distorting your idea or chasing trends that do not genuinely fit your work.
Realization
Realization is where an idea comes to exist in the physical world. It involves performing the research, generating results, and creating knowledge, tools, models, and technologies. It is anything that exists as a result of the work you did.
Here, it is important to again remember that generative AI does not create; it synthesizes. It responds to your queries, so you need to know what you want and how to ask for it clearly.
How to Use AI
One useful principle at this stage is prompt engineering, which simply means asking clearly for the output you need. A strong prompt reduces drift and ambiguity and can often get you a useful result in one query rather than several. Because prompting practices vary across systems, it is worth learning what works best in the particular LLM you are using.
By this stage, you know your work. You are the expert. You know what outputs you need, and you are capable of developing them without the use of AI. With good prompting, AI can dramatically accelerate this process. Tasks that once took weeks or months may now take a few hours. Learn about your particular AI system and the best practices for prompting, and you will fundamentally improve the Realization of your work.
For example, a faculty member may have hosted a summer outreach program for middle school students and want to describe that work effectively in a report, progress update, or future proposal. They can provide the LLM with the associated organization’s website, the summer program description, and any relevant materials used during the program, then ask it to draft a concise paragraph summarizing the program’s purpose, the faculty member’s leadership role, and the success of the effort. Because the faculty member ran the program, it is easy for them to review the draft, verify its accuracy, and refine it as needed.
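As a concrete illustration of that workflow, here is a minimal sketch of a tightly specified drafting prompt. The file name and model are placeholders, and the constraints in the prompt anticipate the cautions below: use only the provided facts and invent nothing.

```python
# A minimal sketch of a tightly specified drafting prompt for the outreach
# example above. Assumes the openai package and an API key; the file name and
# model are illustrative placeholders.
from pathlib import Path

from openai import OpenAI

client = OpenAI()

materials = Path("summer_program_description.txt").read_text()

prompt = (
    "Role: you are helping a faculty member report on completed outreach work.\n"
    "Task: draft ONE concise paragraph summarizing the program below.\n"
    "Must cover: the program's purpose, my leadership role, and its outcomes.\n"
    "Constraints: use only facts present in the materials; do not invent "
    "numbers or outcomes; plain, professional tone; no superlatives.\n\n"
    f"Materials:\n{materials}"
)

response = client.chat.completions.create(
    model="gpt-4o",  # illustrative placeholder
    messages=[{"role": "user", "content": prompt}],
)
print(response.choices[0].message.content)
```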
Where to be Careful
At this stage, remember that you are the expert and the LLM is the tool. It can help synthesize and communicate your ideas, but it does not replace your judgment. Having a powerful tool at your disposal does not guarantee that you know how to use it well.
The model can help package and communicate the work, but it should not exaggerate the impact or invent outcomes that were not achieved. You still need to review the output carefully and make sure it reflects the work accurately and credibly.
In conclusion, many of the concepts above apply across all stages of the research process. I have tried to highlight which ones are most important at each stage, but providing context, proceeding logically, setting boundaries, and writing effective prompts are principles that span the entire Spark-to-Realization arc.
There are other AI concepts that may be useful to your research, such as retrieval-augmented generation, system prompts, and customized chatbots. In practice, however, these are best learned and applied incrementally.
Next week we will turn to the practical question of how to identify and leverage the AI tools available through your institution and periodically assess how well they are fitting into your workflow.
We shape our tools and thereafter our tools shape us.
3 Safe, Practical Ways to Use AI in Your Quest for Research Funding
In this video, I walk through three practical and relatively low-risk ways to use AI in support of research funding, all centered on a simple principle: use AI to organize and refine your own ideas rather than generate new ones for you. I focus especially on dictation-based workflows, which can save substantial time while preserving your voice, expertise, and judgment. I also share a few cautions about skill development, perception, and when AI use is more or less appropriate.
When you are ready, here’s how we can help
Need to get your research funded this year? Check out our 12-week program to get you there.
Check out our storefront where you can access our free Unlocking DOD Funding for University Researchers course and other resources, including materials for faculty applicants.
Ready to book a call to discuss how our program can support faculty at your institution? Let’s chat!