
From Human-in-the-Loop to AI-in-the-Loop Debugging: A Novel Scheme

  • Boston Identity
  • Sep 23
  • 4 min read

AI in Development and IAM


AI is changing how software gets built. It is no longer experimental, with tools for code generation and debugging now part of daily work. In Identity and Access Management (IAM), where security and reliability are critical, AI can speed up development, catch tricky bugs, and let engineers spend more time on policies instead of logs.

Debugging custom IAM endpoints has usually been a manual loop: write code in the console, call the endpoint, check logs, add logger.error, and repeat until it works. This approach is effective but slow and repetitive.
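
To make that loop concrete, here is a minimal sketch of the kind of instrumentation the "add logger.error" step produces. It assumes an IDM-style custom endpoint script where request and logger are injected bindings; the endpoint logic and messages are made up for illustration.

(function () {
    // Breadcrumb so this invocation is easy to find in the logs
    logger.error("DEBUG customEndpoint hit, method=" + request.method);

    if (request.method === "read") {
        var result = { status: "ok" };
        // Log the payload before returning it, to compare against the REST response
        logger.error("DEBUG customEndpoint returning: " + JSON.stringify(result));
        return result;
    }

    logger.error("DEBUG customEndpoint unsupported method: " + request.method);
    throw { code: 400, message: "Unsupported method: " + request.method };
}());

Each pass through the loop adds, moves, or removes a few of these logger.error lines until the failing branch becomes obvious.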

In this post, we show how an AI-in-the-loop approach makes debugging faster and less painful. Using a language model to analyze logs, suggest fixes, and generate code helps shorten cycles and reduce manual effort. We use Claude Code, but any capable assistant would work.

 


Debugging Endpoints the Old Way


Before AI, debugging was largely manual: engineers cycled through code changes and log reviews over and over. The process looked like this:



 

  1. Write JS in the UI

Add or modify scripts directly in the IAM admin console.


  2. Call the custom endpoint

Use REST calls (GET, POST, DELETE) from Postman or curl.


  3. Check logs and responses

Pull logs through the logging API or inspect messages in ELK.


  4. Update the script

Add logger.error or adjust logic based on the output.


  5. Verify and repeat

Test again until the response matches expectations.


AI opens up many possibilities here, and one of the most interesting is letting it take part in the development and debugging work itself.



AI-in-the-Loop Debugging Workflow


Adding an AI assistant makes the workflow more efficient while keeping a human supervisor in control:




  1. Export configurations and code locally with the CLI

Work locally instead of editing scripts in the UI.


  2. Use AI for code changes

Share logs and snippets with the assistant and ask it to suggest fixes.


  3. Review AI-suggested code

A human reviews the suggestion, then the script is re-imported automatically.


  4. Trigger endpoints with curl

Run REST calls from the command line and gather logs.


  5. Verify through logs

Confirm the issue is fixed using the logging API or ELK.


This approach shortens the cycle and makes debugging clearer and more consistent, so engineers can focus on secure, scalable workflows instead of repetitive log tracing.
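
To illustrate steps 4 and 5 of this loop, the trigger-and-collect part can also be scripted so the assistant always works from fresh logs. The sketch below uses a small Node.js helper instead of raw curl; the URLs, token handling, and response shapes are placeholders, not the actual PAIC logging API.

// Minimal sketch: trigger the endpoint under test, then pull recent logs for review.
// BASE, TOKEN, paths, and query parameters are hypothetical placeholders.
const BASE = process.env.IAM_BASE_URL;      // e.g. https://tenant.example.com
const TOKEN = process.env.IAM_ACCESS_TOKEN; // obtained out of band

async function triggerAndCollect() {
  // Step 4: call the custom endpoint
  const endpointRes = await fetch(`${BASE}/openidm/endpoint/customEndpoint`, {
    headers: { Authorization: `Bearer ${TOKEN}` },
  });
  console.log("Endpoint status:", endpointRes.status);
  console.log(await endpointRes.text());

  // Step 5: fetch the newest log lines (placeholder path; adapt to your logging API or ELK)
  const logsRes = await fetch(`${BASE}/monitoring/logs?_pageSize=20`, {
    headers: { Authorization: `Bearer ${TOKEN}` },
  });
  console.log(JSON.stringify(await logsRes.json(), null, 2)); // hand this output to the assistant
}

triggerAndCollect().catch((err) => console.error(err));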

Next, let’s look at two examples of how AI can accelerate development and debugging.

 


Claude Code Modifies and Enhances Test Journeys


We integrated Claude Code into our IAM workflow by training it on authentication journeys in Ping Advanced Identity Cloud (PAIC). Using Frodo CLI, we built custom commands for journey operations, exported existing journeys as references, and created a new test journey. For debugging, we added pctl with ELK streaming to capture logs in real time and documented the process in a claude.md file.

 

A Simple Journey

Our first test was to see if Claude could replicate a simple authentication journey. It produced a 165-line analysis that mapped the five-node flow (username, password, email, scripted decision, data store), documented configurations, and flagged our debug scripts for ELK monitoring. It also confirmed our Frodo CLI setup with the --use-string-arrays -N flags, showing it can support both journey design and debugging.




After analyzing our test journey, we asked Claude Code to modify it by adding message nodes for success and failure outcomes. It updated the JSON to include the correct attributes, mapped the outcomes, and integrated the nodes into the Data Store Decision logic.



In VS Code, it proposed changes to the Test_Journey_New.journey.json file that we could accept or reject. The workflow was straightforward:

·      import the journey

·      export it with Frodo CLI using the --use-string-arrays -N flags

·      apply the modifications.

 

After importing the updated journey into PAIC, the editor showed the new flow exactly as designed: Success and Failure message nodes connected to the Data Store Decision outcomes. Testing confirmed it worked—successful logins displayed “Authentication Successful! Welcome to the system” and failed attempts showed “Authentication Failed! Please check your credentials and try again.”


 


 


 


Claude Code in IAM Journey Debugging


We asked Claude to query the Elasticsearch API for the debug information our journey was capturing. It generated a curl command:

curl -X GET "localhost:9200/paic-logs-*/_search?pretty" -H "Content-Type: application/json"

The query targeted PCTL_ELK_DEBUG and USER_INPUT_CAPTURED messages, sorted by timestamp, limited to the 10 most recent entries. Claude explained that username and email inputs would now be logged, and offered options to run the search, save the command, or adjust the approach—demonstrating its understanding of both journey design and debugging.
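
For reference, that filter can be expressed as a query body along the following lines and passed to the curl command above with -d. The field names (message, @timestamp) are assumptions about how the logs are indexed in this setup.

{
  "size": 10,
  "sort": [ { "@timestamp": { "order": "desc" } } ],
  "query": {
    "bool": {
      "should": [
        { "match_phrase": { "message": "PCTL_ELK_DEBUG" } },
        { "match_phrase": { "message": "USER_INPUT_CAPTURED" } }
      ],
      "minimum_should_match": 1
    }
  }
}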



 

The query revealed that usernames were logged correctly, but the email field showed "NOT_CAPTURED". Claude identified that the email collection step might be skipped or left empty, and suggested exporting the full nodeState for deeper debugging. It provided JavaScript to capture all email field variations (mail, email, emailAddress, userEmail) and analyzed the nodeState dump.
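
The generated script is not reproduced here, but the approach looks roughly like the sketch below, assuming a scripted decision node with the nodeState, logger, and outcome bindings available.

// Rough sketch of the debug approach described above (not the exact generated script).
var candidates = ["mail", "email", "emailAddress", "userEmail"];
var captured = "NOT_CAPTURED";

for (var i = 0; i < candidates.length; i++) {
    var value = nodeState.get(candidates[i]); // null if the attribute was never collected
    logger.error("USER_INPUT_CAPTURED " + candidates[i] + "=" + value);
    if (value !== null && captured === "NOT_CAPTURED") {
        captured = String(value);
    }
}

logger.error("PCTL_ELK_DEBUG email resolved to: " + captured);
outcome = "true"; // let the journey continue while debugging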

Furthermore, Claude found that usernames and passwords logged successfully, but all email fields returned null because the Attribute Collector Node was optional (required: false). It proposed three solutions:

·      Make the email field required (a sketch of this change follows the list),

·      Check if the collection step was bypassed in the journey, or

·      Verify email entry during testing
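
In the exported journey JSON, the first option amounts to a one-line change to the Attribute Collector Node configuration, roughly as shown below; the property names are assumptions based on typical exports and may differ by Frodo version.

{
  "attributesToCollect": ["mail"],
  "required": true
}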




 


Conclusion


This post showed some of the ways AI can be leveraged during identity development: taking on repetitive debugging and adjustments so developers can focus on design, security, and user experience.

 
 