Mastering AI: A Cancer Patient’s Guide to Utilizing AI
By Nicholas Zuk
When faced with a cancer diagnosis, navigating the immense amount of information available can be a draining and overwhelming task. The countless blogs, forums, journals, and websites filled with conflicting advice and complex medical jargon can leave patients with more questions than they started with. What’s more, most searches return information that is neither personalized nor reflective of what a patient actually seeks. However, artificial intelligence (AI) platforms such as OpenAI’s ChatGPT, Google’s Gemini, Anthropic’s Claude, and Perplexity now offer cancer patients the ability to get personalized information and discover other available resources, helping users better understand their diagnosis, explore treatment options, and even manage daily routines or challenges. While these platforms can provide extensive assistance, learning to use them properly to get the most relevant and helpful responses can be challenging.
Prompt engineering is the practice of crafting questions that draw out an AI program’s most helpful and pertinent responses (McKinsey & Company, 2024). With AI, knowing what to ask and, more importantly, how to ask it makes the difference between a vague, generalized response and tailored, actionable advice. Learning to craft better prompts can enhance patients’ ability to research their condition without the confusion a general web search can cause. Whether the goal is learning more about treatment side effects, questions to ask a physician, ways to mitigate chemo fatigue, or even finding other platforms with additional resources, understanding how to prompt AI effectively can be invaluable.
Effective prompt engineering means asking the AI questions in a way that pushes it to provide clear, specific, and personalized answers that directly address the question. The precision and applicability of the AI’s answer depend heavily on the detail and context provided in the prompt. For example, a vague prompt such as “What should I ask a doctor as a cancer patient?” is ineffective because it will likely return a list of generic answers that are unhelpful for a patient’s specific condition. A good prompt is specific in what it asks and has a narrow focus that constrains the AI to applicable information. Prompts like “Explain how immunotherapy works step-by-step for stage 2 lung cancer” or “What are five common coping strategies for managing chemo fatigue?” can lead to highly actionable and targeted insights. A sample structure to mimic would follow this format: “I am a __ year old __ (sex/gender) that is suffering from __ cancer. What are the 10 most important questions I should ask my oncologist about __ (treatment options, recovery, etc.)?” This is a more effective prompt because it includes key factors relevant to the patient’s circumstances and asks for guidance on a particular topic. By being as specific as possible, irrelevant information is filtered out.
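For readers comfortable with a little code, the fill-in-the-blank structure above can be captured as a reusable template. The sketch below is purely illustrative (the function name and fields are made up for this example, and it is not part of any AI platform); it simply assembles the sample prompt from patient details:

```python
def build_prompt(age, sex, cancer_type, topic, n_questions=10):
    """Fill the blank-style template from the text with patient details."""
    return (
        f"I am a {age} year old {sex} that is suffering from {cancer_type} cancer. "
        f"What are the {n_questions} most important questions I should ask "
        f"my oncologist about {topic}?"
    )

# Hypothetical example patient; swap in your own details.
prompt = build_prompt(52, "female", "breast", "treatment options")
```

The resulting text can then be pasted into any chat-based AI; the point is that every blank in the template corresponds to a concrete detail that narrows the AI’s answer.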
Additionally, including context in prompts is an essential part of prompt engineering. Adding descriptors such as age, gender, diagnosis, treatment stage, and any additional information relevant to a prompt significantly improves the AI’s response. The more information given in the initial prompt, the less time is spent writing follow-up prompts to receive the desired information. Sometimes, however, refining or asking follow-up prompts is necessary.
Breaking complex topics into step-by-step requests is another useful method for understanding complicated issues. Because some medical concepts and treatment plans can be difficult to understand, asking the AI model to explain a topic step-by-step can help the reader better digest the material. To do this, prompt the AI to “List the steps in…” or to “Explain __ in simple terms.” Further, while asking about multiple things in a single query can be tempting, combining too many topics can confuse the AI and lead to fragmented answers. For example, asking, “What are the symptoms of breast cancer, what are treatment options, and what are the effects of chemotherapy for it?” combines too many questions about breast cancer and will produce a shallow answer. Instead, this prompt should be separated into three separate, focused prompts. First, ask something like, “What are the most common symptoms of breast cancer?” The second prompt could be, “What are common treatment options for each stage of breast cancer?” Finally, the third prompt could be, “What are the side effects of __ treatment for stage __ (1, 2, 3) breast cancer?” Splitting the original prompt and narrowing the focus into these three questions will prevent the AI from being overwhelmed and allow it to engage more with each topic. Again, keeping prompts concise and focused is key to getting the most helpful information.
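The decomposition above can also be expressed as a tiny helper for readers who like to keep reusable prompt lists. This is only a sketch of the idea in the text (the function name is invented for illustration), not a feature of any AI tool:

```python
def split_question(condition):
    """Return three focused prompts instead of one overloaded question."""
    return [
        f"What are the most common symptoms of {condition}?",
        f"What are common treatment options for each stage of {condition}?",
        f"What are the side effects of chemotherapy for {condition}?",
    ]

# One overloaded question becomes three focused ones, asked in turn.
prompts = split_question("breast cancer")
```

Asking these one at a time, and following up on whichever answer needs more depth, mirrors the three-prompt approach described above.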
Another useful capability is asking the AI to present information in a desired format, such as lists, summaries, comparisons, plans, or even spreadsheets. Include phrases like “Create a list/graph/table displaying information on __” or “Summarize/compare the benefits of proton therapy in prostate cancer treatment” to instruct the AI how to deliver the content. Follow-up questions like “Compare recovery times for __ (kind of treatment) versus __ (kind of treatment)” or “Give examples of common exercises I can do during chemotherapy to reduce fatigue” can help a patient better understand the information the AI previously provided.
Because AI is interactive, it can also adopt a role and provide information from that perspective. Sometimes, framing a query around a specific role can yield more detailed or nuanced answers. For example, telling the AI to “Act as an oncologist treating stage 2 colon cancer utilizing CAPOX chemotherapy” will encourage it to provide responses tailored to that perspective and expertise.
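Under the hood, many chat-based AI services represent this kind of role assignment as a short list of messages, with the role instruction placed in a “system” message ahead of the user’s question. The snippet below is a generic sketch of that common convention (the exact field names and accepted values vary by platform, so check your service’s documentation):

```python
# Role prompt as a message list, following the common
# {"role": ..., "content": ...} convention used by several chat APIs.
messages = [
    {"role": "system",
     "content": ("Act as an oncologist treating stage 2 colon cancer "
                 "utilizing CAPOX chemotherapy.")},
    {"role": "user",
     "content": "What side effects should I watch for during my first cycle?"},
]
```

For most readers, simply typing the “Act as…” sentence at the start of a chat achieves the same effect.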
In addition to researching medical details, AI can offer practical guidance for managing life with cancer. A cancer diagnosis is a disruptive event that can wreak havoc on even the most organized lives. The fatigue, side effects, and medical appointments that come with cancer and its treatment can make maintaining schedules very difficult. However, AI is an excellent tool for creating new schedules and routines, and it can even be a source of self-care tips. A prompt such as “Help me create a daily routine that accommodates my chemotherapy __ (input all daily activities here)” will give the AI enough information to suggest detailed schedules a patient can pick from, alleviating the mental burden of planning a new schedule independently. Further, AI can provide practical advice for challenges such as managing side effects, learning how to advocate for oneself during treatment, or finding a support group. A prompt like “List five ways I can advocate for myself at my dismissive oncologist’s office (listing specific examples will help)” can help patients feel reassured and prepared to face their diagnosis.
While AI is an invaluable tool with many functions, it has limitations. It is essential to double-check all critical information it provides against credible medical resources and healthcare professionals. Because AI draws on data from across the internet to formulate its responses, it can “hallucinate” information or sources that do not exist (IBM Think, 2024), producing responses that contain misinformation. If the information an AI produces seems suspicious, asking the model to give sources for its answer is a good way to check the authenticity of its response. Common ways to spot potential misinformation are to look for overly vague or overly confident answers that lack citations, and for inconsistencies within a response (contradictory statements or overly generic advice). Further, despite its ability to give advice and explain complex diagnoses, AI is not a doctor; only real healthcare providers can assess specific medical situations and provide appropriate recommendations. Instead, AI should be used as a supplement to medical advice: to clarify questions, summarize complicated topics, or find other sources to explore. Using AI effectively requires a balance between trust and skepticism. While it can offer incredible assistance, AI as we know it is very new and makes mistakes. The safest way to use it for specific medical information is to treat its responses as starting points for personal research, not as final answers.
Navigating life with cancer and finding relevant information about one’s medical condition can be greatly enhanced with AI. By crafting specific, context-rich prompts, breaking down complex queries, and refining follow-up questions, patients can access clearer, more relevant answers that boost their knowledge and ability to deal with their diagnosis. At the same time, understanding AI’s limitations is vital to using its tools effectively. While AI can offer valuable insights, it is not a substitute for professional medical advice. When used thoughtfully, AI can be a powerful ally, offering clarity, emotional support, and practical guidance tailored to individual needs. The key lies in understanding how to ask the right questions and how to verify the answers. By doing this, patients can take a more active role in their care and ultimately make their journey through cancer more informed and manageable.
References
IBM Think. (2024). What are AI Hallucinations? Retrieved from https://www.ibm.com/think/topics/ai-hallucinations
McKinsey & Company. (2024). What is Prompt Engineering? Retrieved from https://www.mckinsey.com/featured-insights/mckinsey-explainers/what-is-prompt-engineering