📣 Special Alert: We have published the fourth interview in our series of discussions with former program managers, officers, and directors. Scroll down for the link.
Over the past two years, AI tools have moved from novelty to routine. Researchers now find themselves using AI in some capacity almost daily.
When it first became widely available, much of the discussion around AI and research funding was theoretical. PIs debated which purposes AI should serve, what the risks might be, and what rules might apply. Those conversations were important, particularly because policies and guidelines were still emerging. Researchers understandably wanted to avoid situations in which improper use of AI might create problems such as the disqualification of a proposal.
But today we are in a different phase. Instead of focusing on what AI means in theory, we can begin examining how these tools are being used in practice and, most importantly, how you are using them.
Now is a good time, then, to step back and think more deliberately about how AI fits into your work, especially in the context of applying for and securing funding.
The challenge for researchers is deciding how these tools fit into their intellectual practice.
In this and the next two newsletters, we will explore this question from three angles. Today we focus on developing a deliberate relationship with AI. Next we will discuss how to maintain discipline in your thinking while using AI. Finally, we will consider how to develop a regular protocol for assessing your use of AI and evaluating its results.
By developing a deliberate relationship with AI, we mean that researchers should consciously decide how AI fits into their work, rather than allowing patterns of use to develop ad hoc.
It is easy for AI to enter a workflow through small moments of convenience. For example, you might ask an AI tool to draft responses to routine emails, perhaps even accepting suggested edits that appear while composing a message. You might ask it to summarize papers that you have not yet read closely, or to suggest a possible outline for a proposal section.
None of these uses is necessarily problematic. The important question is whether they were chosen deliberately or whether they have quietly become part of your routine workflow without conscious decision or reflection.
Small moments of convenience evolve into habits. Over time, those habits begin to shape how work gets done. Misuse of AI is certainly a concern, and it can take subtle forms, including unexamined use. It can be valuable to periodically pause and ask a few simple questions: How am I currently using AI? How did these habits develop? Are they actually helping my work?
A second consideration is clarifying your boundaries around the use of AI.
Different researchers will draw boundaries in different places. Some may be comfortable using AI as a thought partner when outlining ideas. Others may find it useful for summarizing literature or organizing notes. None of these choices is inherently right or wrong. What matters is that the boundaries are chosen consciously rather than drifting into place via habit.
It is also worth noting that boundaries may shift depending on the task. For one project, you may be comfortable using AI to help summarize background literature; for another, you may prefer to work through the material entirely on your own. In some cases, you might use AI as a supporting tool while doing most of the work yourself.
The point is not to create rigid rules, but to ensure that the way you use AI reflects deliberate choices rather than default behavior.
A third consideration is that AI is meant to support your workflow and research process. It should not automatically become the default starting point for every task.
Some activities may benefit from AI assistance, while others may be better approached without it. The goal is not to maximize AI usage, but to apply it thoughtfully where it genuinely adds value.
Once you start thinking more deliberately about your relationship with AI, another question naturally follows: how do these tools affect the way we think while using them?
Next week we will explore how to protect your thought process when working with AI.
"The real problem is not whether machines think but whether men do." – B.F. Skinner
Don’t wait to engage DARPA: A former PM explains
Many faculty wait for the "perfect" DARPA opportunity to appear, but Dr. Rohith Chandrasekar (former DARPA Program Manager, now at Leidos) argues that the real advantage comes from engaging early and often. In this interview, Rohith explains how DARPA PMs think about capabilities versus technologies, what to bring to a first conversation, and why the second meeting is often the one that really matters.
When you are ready, here’s how we can help
Need to get your research funded this year? Check out our 12-week program to get you there.
Check out our storefront, where you can access our free Unlocking DOD Funding for University Researchers course and other resources, including materials for faculty applicants.
Ready to book a call to discuss how our program can support faculty at your institution? Let’s chat!