
A Clinician’s Primer on AI-Assisted Coding

By John Britton

In 2025, vibe coding took off as a name for building with AI. For clinicians, what matters is not the phrase itself but the shift behind it. AI can now help create small, useful tools, not just answer questions or generate text. That opens the door to building resources that better fit a specific client, workflow, or practice need, even without formal programming experience.

This is a shift from using AI to answer questions to using it to build things.

This primer will cover what AI-assisted coding is, why it matters for clinicians, what can realistically be built without a programming background, how to get started using simple workflows, and how to think clearly about privacy, publishing, and competence when building and sharing tools.

What AI-Assisted Coding Is

AI-assisted coding, or vibe coding, means using AI to generate, edit, explain, and troubleshoot code in order to create websites, programs, and software. It exists along a spectrum, from a non-programmer asking a chatbot for a simple HTML file, to more approachable tools with built-in previews and publishing, to IDEs and other coder-facing tools designed for more technical users.

Why does this matter for clinicians? The value of AI-assisted coding is in how quickly and specifically it allows people to build things that fit their own needs and preferences. The barrier to entry is low enough that, with minimal training and some experimentation, clinicians can start creating simple tools. That creates a real opportunity to build or adapt tools that better match a given practice, workflow, or clinical approach, rather than relying entirely on what already exists.

What Clinicians Can Build

Client-Facing Use

  • A body scan for a client with chronic pain that skips certain areas and guides attention through tolerable regions, while incorporating imagery of a lake the client already identifies as calming.
  • A two-minute panic tool a college student can use in class, with short prompts like “feet on the ground” and “find five blue objects,” along with a brief explanation of what is happening in their body.
  • A parent support tool that walks through how to respond to a child’s outburst using language the caregiver has already found effective.
  • A sleep routine helper that reflects what a client with insomnia has actually been able to follow through on, rather than an idealized routine.
  • An exposure ladder tool built around a specific feared situation, with language and steps that match how the client actually describes their anxiety.
  • A journaling tool where the client chooses from prompts they helped select, types freely, and then sees what they wrote reformatted into a clean, readable layout they can save or revisit.

Creating small apps, worksheets, and helpers like these is just the beginning. The value is not just that they exist, but that they can reflect client preferences and known needs in ways that generic tools rarely do.

Professional or Internal Use

  • A personalized treatment manual that pulls together CBT, ACT, and exposure strategies into one searchable system, organized around how the clinician actually conceptualizes cases. For example, a panic section that includes interoceptive exposures, common avoidance patterns, and short scripts the clinician tends to use when explaining the panic cycle.
  • A treatment planning tool built from the clinician’s own general templates, de-identified examples, and preferred interventions, which can generate draft treatment plans or session outlines in their usual style and sequencing.
  • A session prep tool that functions like a structured playbook, where selecting a problem area like OCD or insomnia brings up a sequence of interventions, examples, and phrasing the clinician already tends to use, similar to treatment ebooks but personalized and easier to update.
  • A screener interpretation helper that returns a concise interpretation, key follow-up questions, and clinical considerations, without storing identifying information.
  • A differential diagnosis helper where the clinician inputs key symptoms and gets a structured set of rule-outs, red flags, and follow-up questions, organized in a way that matches how they were trained to think through cases.
  • A treatment fidelity helper for a specific modality, such as CPT, where the core steps, sequencing, and common pitfalls are laid out in a quick-reference format, along with reminders about the parts the clinician knows they tend to forget in session.

This category works best when it reflects how the clinician already thinks and practices, rather than introducing a new system they have to learn.

Business or Practice Use

  • An hours tracking tool that automatically groups sessions into two-week pay periods, calculates totals based on typical schedules, and formats the output for practicum, internship, or supervision reporting.
  • A personal or practice website built and updated without relying on a third-party platform, including service pages, FAQs, and referral information that reflect how the clinician actually describes their work.
  • A referral or resource page that can be quickly updated with local providers, support groups, or crisis options, organized in a way that fits the clinician’s typical recommendations.
  • A fee estimator or insurance explainer that helps clients understand typical costs, out-of-network benefits, or common billing scenarios using clear, plain language.
  • A spreadsheet-style tool that tracks sessions, cancellations, payments, or supervision hours, automatically calculates totals, and formats the information for reporting.

This category tends to be the most immediately useful, since it focuses on reducing small but repeated administrative burdens.
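To make the hours-tracking idea concrete, here is a minimal sketch of the kind of logic such a tool could contain. The function name, data shape, and dates are all illustrative, not taken from any real tool; a clinician would describe this behavior to the AI in plain language rather than write it by hand.

```javascript
// Hypothetical sketch: grouping session dates into two-week pay periods
// and totaling hours. All names and dates here are made up for illustration.

// Each session is a date string plus a duration in hours.
const sessions = [
  { date: "2025-01-06", hours: 1.0 },
  { date: "2025-01-09", hours: 1.5 },
  { date: "2025-01-21", hours: 1.0 },
];

// Sort sessions into 14-day blocks counted from a chosen start date,
// and sum the hours in each block.
function groupIntoPayPeriods(sessions, periodStart) {
  const start = new Date(periodStart + "T00:00:00Z");
  const msPerPeriod = 14 * 24 * 60 * 60 * 1000;
  const periods = {};
  for (const s of sessions) {
    const d = new Date(s.date + "T00:00:00Z");
    const index = Math.floor((d - start) / msPerPeriod); // which 14-day block
    periods[index] = (periods[index] || 0) + s.hours;
  }
  return periods;
}

const totals = groupIntoPayPeriods(sessions, "2025-01-06");
// Period 0 covers Jan 6-19, period 1 covers Jan 20-Feb 2.
console.log(totals); // { '0': 2.5, '1': 1 }
```

The point is not the code itself but that the underlying logic is small: a date, a subtraction, and a running total. Formatting the result for a practicum or supervision report is a one-sentence follow-up request to the AI.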

Most of these tools are much simpler than they sound. Many can be built as a single file, opened in a browser, and used immediately. Others can be created in tools that show a live preview and handle sharing or publishing for you. The point is not to build complex software. It is to create something small that fits a real need.

The Easiest Ways to Start

The easiest way to start is with a free plan on a major AI provider’s website and a simple prompt. Many small tools can be created this way without touching a formal coding environment. For a first attempt, it helps to be specific about the purpose of the tool, the audience, the visual style, and the output format you want.

For example, a clinician could paste in a prompt like this:

Create a single self-contained HTML file for a calming breathing exercise for young children. The page should show colorful bubbles that slowly float around the screen while growing and shrinking in size to guide breathing. Use warm, friendly colors and make the design feel playful, simple, and soothing. Include a short title and one brief instruction sentence for the child. Add a start and pause button. Keep everything in one HTML file with embedded CSS and JavaScript so it can be saved and opened directly in a browser with no other dependencies.

That is enough to get a first version. From there, the user can open the file in a browser, see what works, and ask the AI to revise it. For example, the user might ask it to slow the animation, make the bubbles larger, simplify the text, add a visual cue for inhale and exhale, or change the colors for a different age group.
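For readers curious what the AI actually hands back, a heavily simplified version of such a file might look like the sketch below. A real response will be longer and more polished; this is only meant to show that the whole tool can live in one readable file.

```html
<!-- Simplified sketch of the kind of file the AI might return. -->
<!DOCTYPE html>
<html>
<head>
  <meta charset="utf-8">
  <title>Bubble Breathing</title>
  <style>
    body { background: #fff7e6; font-family: sans-serif; text-align: center; }
    #bubble {
      width: 120px; height: 120px; margin: 40px auto;
      border-radius: 50%; background: #ffb38a;
      animation: breathe 8s ease-in-out infinite;
      animation-play-state: paused;
    }
    /* Grow on the inhale, shrink on the exhale */
    @keyframes breathe {
      0%, 100% { transform: scale(1); }
      50%      { transform: scale(1.8); }
    }
  </style>
</head>
<body>
  <h1>Bubble Breathing</h1>
  <p>Breathe in as the bubble grows, and out as it shrinks.</p>
  <div id="bubble"></div>
  <button onclick="toggle()">Start / Pause</button>
  <script>
    // Switch the CSS animation between running and paused.
    function toggle() {
      const style = document.getElementById("bubble").style;
      style.animationPlayState =
        style.animationPlayState === "running" ? "paused" : "running";
    }
  </script>
</body>
</html>
```

Saved with a .html extension and double-clicked, a file like this opens in the browser and runs with no installation, account, or internet connection.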

If the chatbot being used has a thinking mode, reasoning mode, or similar option, it may help to turn that on for a first build, especially when asking it to create something from scratch or troubleshoot errors. That is not always necessary for very simple tools, but it can improve the quality of the first draft and make the AI more likely to follow multi-step instructions.

Most of these chatbots now include a preview pane for code, so the user can often test the tool immediately in the browser. If not, the user can save the file and open it directly, which for a single HTML file usually means just clicking it and having it run in the browser on their own computer. In many of these simple cases, the tool is running locally rather than being deployed anywhere.

From there, the process is usually iterative. If there is an error, the user can paste the error message back into the AI and ask it to fix it. If the colors, text, animation speed, or layout are off, the user can simply describe what should change and try again. Often by the second or third iteration, the basic product is already there.

The next step up is using tools built more specifically for AI-assisted coding, such as Google AI Studio, Replit, Bolt, Claude Artifacts, and similar platforms. Some have free tiers, while others require payment for heavier use or more advanced features. These tools can make the process easier by offering built-in previews, faster iteration, and in some cases simple sharing or publishing, which can be especially helpful for readers who want more than a single local HTML file but are still not interested in using a traditional coding setup.

A more technical path uses tools that work more directly with code, like Codex, Cursor, or similar environments. These have been more programmer-facing, but they are getting easier to use as AI takes on more of the explaining, setup, debugging, and revision. The same basic process still applies. You ask the AI to build something, explain what it changed, fix errors, and keep iterating.

This is the kind of workflow used to build larger projects like LocalScribe, where there are multiple files, moving parts, and dependencies rather than one simple page. At the same time, newer tools like Claude Code and similar cowork-style tools are making more of that power available in ways that feel less tied to traditional programming, which lowers the barrier for people who are not technical but still want to build more ambitious things. Many clinicians will still do best by starting simple.

Privacy, Publishing, and Data

This is where many people, especially in healthcare, shy away, assuming any tool they create will automatically be a HIPAA violation if they share it with clients. That assumption is not quite right: whether privacy obligations apply depends entirely on how the tool is built and used.

Most of the tools described above do not involve transmitting or storing clinical data at all.

The more useful way to think about this is not whether AI or code was involved, but what data the tool uses, where that data goes, and whether anything identifiable is stored or transmitted.

There are three main places data can live:

  • locally on a device
  • in the browser on a device
  • on a server or external system

Each of these has different practical and privacy implications.

Local Files and Self-Contained Tools

For many of the examples above, the end product is a single HTML file or a small group of files in one folder. These can be used by the clinician on their own device or, if the tool is for a client, sent to the client to use on their own device. In many cases, the tool functions more like an interactive handout, guided exercise, or structured worksheet than a traditional app.

From a practical standpoint, this is simple. The clinician creates the file, opens it in a browser, and can email or share it like any other document. The client opens it and uses it locally. Nothing is transmitted anywhere unless the user chooses to send it.

From a privacy and ethics standpoint, the key question is whether the tool collects or stores any information. For many tools, no data is gathered at all. A breathing exercise, grounding tool, or psychoeducational page may simply run and close. It is basically a souped-up handout or guided meditation. In other cases, such as a bespoke mood or behavior tracker, information might be saved only on the client’s own device and not transmitted back to the clinician.

That distinction matters. A tool that runs locally without transmitting data is very different from a tool that sends or stores information elsewhere.

Informed consent still matters here. Clients should understand what the tool is, what it does, what it does not do, and where any information stays. Even when risk is low, transparency is part of responsible use. As with any handout, worksheet, or intervention shared in treatment, the clinician is still responsible for using judgment about what is appropriate to provide.

Browser-Based Tools and Simple Hosting

Some tools are shared as links rather than files. These may be hosted on simple platforms like Netlify, or created in environments like Claude Artifacts or Google AI Studio that allow quick sharing.

From a practical standpoint, this makes tools easier to access. A client can open a link without downloading anything, and the clinician can update the tool in one place rather than resending files.

A website does not automatically mean data is being collected or stored on a server. Many simple tools run entirely in the browser and do not transmit any information. In these cases, they function similarly to a local file, just accessed through a link instead. Some tools may also save information only inside the browser on that one device. That still does not mean it is being sent back anywhere, but it does mean the information may persist locally in that browser until the user clears it.
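As a concrete illustration of browser-only storage, a simple mood tracker might use the browser's built-in localStorage, which keeps data on that one device until the user clears it. The names here ("moodEntries", saveEntry, loadEntries) are made up for this example, and the fallback object exists only so the same sketch runs outside a browser.

```javascript
// Illustrative sketch of browser-only storage with localStorage.
// Nothing in this code sends data anywhere; it only writes to and
// reads from storage on the user's own device.
const store = (typeof localStorage !== "undefined")
  ? localStorage
  : { // in-memory stand-in so the sketch also runs outside a browser
      data: {},
      getItem(k) { return this.data[k] ?? null; },
      setItem(k, v) { this.data[k] = v; },
    };

// Read back everything saved on this device.
function loadEntries() {
  return JSON.parse(store.getItem("moodEntries") || "[]");
}

// Append one mood entry to the list saved in this browser.
function saveEntry(mood, note) {
  const entries = loadEntries();
  entries.push({ mood, note, when: new Date().toISOString() });
  store.setItem("moodEntries", JSON.stringify(entries));
}

saveEntry(4, "slept better");
console.log(loadEntries().length); // 1
```

A tool built this way behaves like a private notebook on the client's device: the clinician never sees the entries unless the client chooses to show them.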

One simple option is free hosting through Netlify. A clinician can create a free account, even with a separate email address, upload the folder containing the tool, and get a public link without needing to buy a website domain. That link can then be shared directly with anyone, including a client, without needing to send files back and forth. For simple static tools, it is a fairly seamless way to make something accessible.

However, once a tool includes inputs, tracking, or anything that sends data outward, the situation changes. Clinicians need to understand how the specific tool behaves. The key question is not whether it is a website, but whether it is transmitting or storing information beyond the user’s device.
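In code terms, that dividing line is often visible directly in the tool's file. A purely local tool only updates what is on the screen, while a transmitting tool contains a network call. The sketch below contrasts the two patterns; the URL is a made-up placeholder, shown only so the shape of a network call is recognizable.

```javascript
// Local-only: the entered text never leaves the page.
function showReflection(text) {
  return "You wrote: " + text; // displayed on screen, stored nowhere
}

// Transmitting: this pattern sends the input to a server.
// "https://example.com/submit" is a placeholder, not a real endpoint.
// Once a tool contains a call like this, questions about servers,
// security, and HIPAA obligations become relevant.
function sendReflection(text) {
  return fetch("https://example.com/submit", {
    method: "POST",
    body: JSON.stringify({ text }),
  });
}

console.log(showReflection("feeling calmer today"));
```

A clinician who is unsure how a tool behaves can ask the AI directly whether the code it produced transmits or stores anything, and to point to the lines that do.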

Tools That Transmit or Store Data

The boundary becomes much clearer once a tool sends information back to the clinician, stores it on a server, or integrates into ongoing care.

From a practical standpoint, this includes things like intake forms, dashboards, trackers that sync across devices, or any system where the clinician can view client-entered data.

From a legal and ethical standpoint, this is no longer the same category of simple tool. Once identifiable health information is being created, received, stored, or transmitted in connection with care, HIPAA and related obligations become central. This typically requires appropriate platforms, security measures, and in many cases a business associate agreement.

Informed consent is still necessary here, but it is not sufficient on its own. The responsibility shifts from simply explaining what the tool does to ensuring that the way information is being handled is appropriate and protected.

Keeping Clinician Privacy in Mind

A separate issue is the clinician’s own privacy when sharing a web app publicly.

In most cases, platforms like Netlify, Claude Artifacts, Replit, or Bolt provide a public link that does not directly identify who created or uploaded the tool. That means a simple client-facing tool can often be shared without exposing personal or professional details.

The main differences come down to how the tool is shared. A self-contained file sent directly to a client stays more private between the clinician and client. A hosted link is easier to access but exists in a more public space.

The risk is usually indirect rather than explicit. If the link is shared alongside a clinician’s name, website, or email, then it becomes connected. If it is shared more selectively, it may remain effectively separate. For most simple tools that do not collect or transmit data, this is less about formal privacy risk and more about how much a clinician wants the tool tied to their identity or practice.

Informed consent can matter here too, especially if a clinician is sharing a custom-built tool directly with a client as part of care. The client should understand what the tool is, that it is not a replacement for clinical judgment or emergency support, and whether its use involves any third-party platform or public URL.

The practical takeaway is simple. A self-contained file shared directly with a client usually exposes less than a publicly hosted link. A public URL may be more convenient, but it can also create more connection between the tool and the clinician’s practice if it is shared in public-facing ways. Those are separate questions from HIPAA, but still worth thinking through.

Clinician Competence

Clinicians do not need to become programmers, but they do need enough understanding to use these tools responsibly. That means knowing what they are building, keeping it as simple as possible, understanding what the tool does with data, recognizing when privacy concerns actually apply, and noticing when something has moved beyond what they can reasonably manage. In that sense, AI use in clinical work is also a competence question.

Conclusion

Clinicians have at their fingertips the possibility of making personalized, client-specific experiences, activities, assistance, and psychoeducation that are highly relevant to presenting problems. As clinicians, we can be empowered to elevate the experience of our clients, rather than relying on the same free three-column thought record with awkward wording that we’ve used for 10+ years.

This does not require becoming a programmer or building complex systems. In most cases, it starts with something small, simple, and specific to one need. Over time, those small tools can add up to a more flexible and responsive way of working that better fits both the clinician and the client.

Subscribe for future posts

If you want new writing at the intersection of AI and psychology, ethics, and implementation of AI in clinical practice, subscribe on Substack.


The views expressed here are my own and do not necessarily reflect the views of any current or future employer, training site, academic institution, or affiliated organization.