<?xml version="1.0" encoding="utf-8" standalone="yes"?><rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom"><channel><title>Prompt Engineering on Alfero Chingono</title><link>https://www.chingono.com/tags/prompt-engineering/</link><description>Recent content in Prompt Engineering on Alfero Chingono</description><generator>Hugo -- gohugo.io</generator><language>en-us</language><lastBuildDate>Fri, 17 Apr 2026 07:57:23 -0400</lastBuildDate><atom:link href="https://www.chingono.com/tags/prompt-engineering/index.xml" rel="self" type="application/rss+xml"/><item><title>How I Wired Up an AI Tutor to Teach Like a Socratic Mentor — Not a Cheater</title><link>https://www.chingono.com/blog/2025/08/05/how-i-wired-up-an-ai-tutor-to-teach-like-a-socratic-mentor-not-a-cheater/</link><pubDate>Tue, 05 Aug 2025 09:00:00 +0000</pubDate><guid>https://www.chingono.com/blog/2025/08/05/how-i-wired-up-an-ai-tutor-to-teach-like-a-socratic-mentor-not-a-cheater/</guid><description>&lt;img src="https://www.chingono.com/blog/2025/08/05/how-i-wired-up-an-ai-tutor-to-teach-like-a-socratic-mentor-not-a-cheater/cover.png" alt="Featured image of post How I Wired Up an AI Tutor to Teach Like a Socratic Mentor — Not a Cheater" /&gt;&lt;p&gt;The problem with most &amp;ldquo;AI tutors&amp;rdquo; today is simple: they are too helpful. If a student asks &amp;ldquo;How do I solve for x?&amp;rdquo;, a generic LLM will often just show the steps and give the answer. In an educational context, that isn&amp;rsquo;t teaching; it is a shortcut to a finished worksheet with zero retention.&lt;/p&gt;
&lt;p&gt;When I started building &lt;a class="link" href="https://www.chingono.com/blog/2025-05-08-teaching-kids-to-code-with-bayesian-knowledge-tracing-why-i-built-firefly/" &gt;FireFly&lt;/a&gt;, my goal was the opposite. I wanted an AI that would act like a &lt;strong&gt;Socratic Mentor&lt;/strong&gt;. It should never give the answer directly. It should only ask the next right question to help the student find the answer themselves.&lt;/p&gt;
&lt;p&gt;That turned out to be a surprisingly hard technical challenge.&lt;/p&gt;
&lt;h2 id="the-helpful-assistant-bias"&gt;The &amp;ldquo;Helpful Assistant&amp;rdquo; Bias
&lt;/h2&gt;&lt;p&gt;Large Language Models (LLMs) are trained to be helpful assistants. Their default behavior is to minimize the &amp;ldquo;effort&amp;rdquo; for the user. In education, you actually want to &lt;em&gt;maximize&lt;/em&gt; the student&amp;rsquo;s cognitive effort within a safe range (the Zone of Proximal Development).&lt;/p&gt;
&lt;p&gt;To break the LLM&amp;rsquo;s habit of just giving the answer, I had to move beyond simple system prompts and build a more structured interaction loop.&lt;/p&gt;
&lt;h2 id="three-layers-of-a-socratic-ai"&gt;Three Layers of a Socratic AI
&lt;/h2&gt;&lt;p&gt;In FireFly, the &amp;ldquo;AI Tutor&amp;rdquo; isn&amp;rsquo;t just one prompt. It is three distinct layers working together.&lt;/p&gt;
&lt;h3 id="1-the-knowledge-layer-bkt"&gt;1. The Knowledge Layer (BKT)
&lt;/h3&gt;&lt;p&gt;Before the AI says a word, the system checks the student&amp;rsquo;s current mastery using &lt;a class="link" href="https://www.chingono.com/blog/2025-05-08-teaching-kids-to-code-with-bayesian-knowledge-tracing-why-i-built-firefly/" &gt;Bayesian Knowledge Tracing (BKT)&lt;/a&gt;. If the system knows the student is 90% likely to understand &lt;code&gt;loops&lt;/code&gt; but only 10% likely to understand &lt;code&gt;nested loops&lt;/code&gt;, it passes that context to the LLM.&lt;/p&gt;
&lt;p&gt;The prompt becomes: &amp;ldquo;The student understands X, but is struggling with Y. Ask a question that bridges the gap.&amp;rdquo;&lt;/p&gt;
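&lt;p&gt;That context assembly can be sketched roughly as follows. The function name, mastery dictionary, and 0.8 threshold are illustrative, not FireFly&amp;rsquo;s actual internals:&lt;/p&gt;

```python
def build_tutor_context(mastery, threshold=0.8):
    """Turn BKT mastery estimates (skill -> probability) into LLM context.

    Splits skills into 'understood' vs 'struggling' and phrases the gap
    so the model is steered toward a bridging question, not an answer.
    """
    known = sorted(s for s, p in mastery.items() if p >= threshold)
    weak = sorted(s for s in mastery if s not in known)
    return (
        f"The student understands {', '.join(known) or 'nothing yet'}, "
        f"but is struggling with {', '.join(weak) or 'nothing'}. "
        "Ask a question that bridges the gap."
    )
```

&lt;p&gt;For the loops example above, this produces exactly the &amp;ldquo;understands X, struggling with Y&amp;rdquo; framing that gets prepended to the tutor prompt.&lt;/p&gt;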
&lt;h3 id="2-the-socratic-constraint"&gt;2. The Socratic Constraint
&lt;/h3&gt;&lt;p&gt;The core prompt for the FireFly tutor is built around strict negative constraints:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;&lt;strong&gt;NEVER&lt;/strong&gt; provide the full solution.&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;NEVER&lt;/strong&gt; point out the exact line of the error.&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;ALWAYS&lt;/strong&gt; ask a leading question.&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;ALWAYS&lt;/strong&gt; validate the student&amp;rsquo;s &lt;em&gt;process&lt;/em&gt;, not just their output.&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;If the student is stuck on a syntax error, the AI might say: &amp;ldquo;I see you&amp;rsquo;re trying to repeat a block of code. Have you looked at where your curly braces are starting and ending?&amp;rdquo;&lt;/p&gt;
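&lt;p&gt;A minimal sketch of what such a constraint-driven system prompt can look like. The exact wording in FireFly differs; the negative-constraint structure is the point:&lt;/p&gt;

```python
# Illustrative system prompt built around strict negative constraints.
SOCRATIC_SYSTEM_PROMPT = """\
You are a Socratic coding mentor for children.

Hard rules:
- NEVER provide the full solution.
- NEVER point out the exact line of the error.
- ALWAYS respond with a leading question that nudges the student forward.
- ALWAYS validate the student's process, not just their output.

If the student shares code with a bug, ask about the concept behind the
bug (for example, where a repeated block starts and ends), never the fix.
"""
```

&lt;p&gt;Putting the prohibitions first, in blunt imperative form, works better in practice than burying them in a long persona description.&lt;/p&gt;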
&lt;h3 id="3-the-age-adapted-tone"&gt;3. The Age-Adapted Tone
&lt;/h3&gt;&lt;p&gt;Teaching a 7-year-old is different from teaching a 15-year-old. FireFly uses different &amp;ldquo;personas&amp;rdquo; depending on the user&amp;rsquo;s profile. For younger kids, the tone is more encouraging and uses metaphors (like &amp;ldquo;the computer is a very literal robot&amp;rdquo;). For older students, the tone is more technical and precise.&lt;/p&gt;
&lt;h2 id="dealing-with-the-just-tell-me-frustration"&gt;Dealing with the &amp;ldquo;Just Tell Me&amp;rdquo; Frustration
&lt;/h2&gt;&lt;p&gt;One of the biggest challenges in Socratic teaching is student frustration. When an AI keeps asking questions instead of giving answers, some students will simply repeat &amp;ldquo;Just tell me the answer.&amp;rdquo;&lt;/p&gt;
&lt;p&gt;To handle this, FireFly has a &amp;ldquo;Frustration Fuse.&amp;rdquo; If the student asks for the answer three times in a row, the AI is allowed to provide a &lt;em&gt;hint&lt;/em&gt; that is slightly more direct, or it can offer to &amp;ldquo;reset&amp;rdquo; the problem to an easier version. This keeps the student engaged without breaking the pedagogical goal.&lt;/p&gt;
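&lt;p&gt;The fuse itself is just a small piece of state. Here is a minimal sketch; the class name and API are hypothetical:&lt;/p&gt;

```python
class FrustrationFuse:
    """Trip to a more direct hint after N consecutive answer demands."""

    def __init__(self, limit=3):
        self.limit = limit
        self.consecutive_demands = 0

    def record(self, asked_for_answer):
        """Return the tutoring mode to use for the next reply."""
        if asked_for_answer:
            self.consecutive_demands += 1
        else:
            self.consecutive_demands = 0  # any real attempt resets the fuse
        if self.consecutive_demands >= self.limit:
            self.consecutive_demands = 0  # fuse trips, then re-arms
            return "direct_hint"          # or offer an easier version
        return "socratic_question"
```

&lt;p&gt;Because any genuine attempt resets the counter, the fuse only trips for sustained demands, not for a student who asks once and then keeps working.&lt;/p&gt;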
&lt;h2 id="why-this-matters"&gt;Why This Matters
&lt;/h2&gt;&lt;p&gt;We are entering an era where AI-powered personalized learning will be the norm. But if we just build &amp;ldquo;answer machines,&amp;rdquo; we are doing a disservice to the next generation of learners.&lt;/p&gt;
&lt;p&gt;Building FireFly taught me that the most powerful use of AI in education isn&amp;rsquo;t knowing everything. It&amp;rsquo;s being a patient, persistent, and occasionally annoying mentor who refuses to let the student take the easy way out.&lt;/p&gt;
&lt;hr&gt;
&lt;p&gt;&lt;em&gt;Related reading:&lt;/em&gt;&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;&lt;a class="link" href="https://www.chingono.com/blog/2025-05-08-teaching-kids-to-code-with-bayesian-knowledge-tracing-why-i-built-firefly/" &gt;Teaching Kids to Code With BKT: Why I Built FireFly&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a class="link" href="https://www.chingono.com/blog/2026/01/15/sandboxed-code-execution-for-kids-how-judge0-and-python-sys-settrace-power-firefly/" &gt;Sandboxed Code Execution for Kids: How Judge0 and Python sys.settrace Power FireFly&lt;/a&gt;&lt;/li&gt;
&lt;/ul&gt;</description></item></channel></rss>