
How to Build a Conversational Research AI Agent with LangGraph: Step Replay and Time-Travel Checkpoints


In this tutorial, we aim to understand how LangGraph enables us to manage conversation flows in a structured manner, while also providing the power to “time travel” through checkpoints. By building a chatbot that integrates a free Gemini model and a Wikipedia tool, we can add multiple steps to a dialogue, record each checkpoint, replay the full state history, and even resume from a past state. This hands-on approach enables us to see, in real-time, how LangGraph’s design facilitates the tracking and manipulation of conversation progression with clarity and control. Check out the FULL CODES here.

!pip -q install -U langgraph langchain langchain-google-genai google-generativeai typing_extensions
!pip -q install "requests==2.32.4"


import os
import json
import textwrap
import getpass
import time
from typing import Annotated, List, Dict, Any, Optional


from typing_extensions import TypedDict


from langchain.chat_models import init_chat_model
from langchain_core.messages import BaseMessage
from langchain_core.tools import tool


from langgraph.graph import StateGraph, START, END
from langgraph.graph.message import add_messages
from langgraph.checkpoint.memory import InMemorySaver
from langgraph.prebuilt import ToolNode, tools_condition


import requests
from requests.adapters import HTTPAdapter, Retry


if not os.environ.get("GOOGLE_API_KEY"):
   os.environ["GOOGLE_API_KEY"] = getpass.getpass("🔑 Enter your Google API Key (Gemini): ")


llm = init_chat_model("google_genai:gemini-2.0-flash")

We start by installing the required libraries, setting up our Gemini API key, and importing all the necessary modules. We then initialize the Gemini model using LangChain so that we can use it as the core LLM in our LangGraph workflow.

WIKI_SEARCH_URL = "https://en.wikipedia.org/w/api.php"


_session = requests.Session()
_session.headers.update({
   "User-Agent": "LangGraph-Colab-Demo/1.0 (contact: [email protected])",
   "Accept": "application/json",
})
retry = Retry(
   total=5, connect=5, read=5, backoff_factor=0.5,
   status_forcelist=(429, 500, 502, 503, 504),
   allowed_methods=("GET", "POST")
)
_session.mount("https://", HTTPAdapter(max_retries=retry))
_session.mount("http://", HTTPAdapter(max_retries=retry))


def _wiki_search_raw(query: str, limit: int = 3) -> List[Dict[str, str]]:
   """
   Use MediaWiki search API with:
     - origin='*' (good practice for CORS)
     - Polite UA + retries
   Returns compact list of {title, snippet_html, url}.
   """
   params = {
       "action": "query",
       "list": "search",
       "format": "json",
       "srsearch": query,
       "srlimit": limit,
       "srprop": "snippet",
       "utf8": 1,
       "origin": "*",
   }
   r = _session.get(WIKI_SEARCH_URL, params=params, timeout=15)
   r.raise_for_status()
   data = r.json()
   out = []
   for item in data.get("query", {}).get("search", []):
       title = item.get("title", "")
       page_url = f"https://en.wikipedia.org/wiki/{title.replace(' ', '_')}"
       snippet = item.get("snippet", "")
       out.append({"title": title, "snippet_html": snippet, "url": page_url})
   return out


@tool
def wiki_search(query: str) -> List[Dict[str, str]]:
   """Search Wikipedia and return up to 3 results with title, snippet_html, and url."""
   try:
       results = _wiki_search_raw(query, limit=3)
       return results if results else [{"title": "No results", "snippet_html": "", "url": ""}]
   except Exception as e:
       return [{"title": "Error", "snippet_html": str(e), "url": ""}]


TOOLS = [wiki_search]

We set up a Wikipedia search tool with a custom session, retries, and a polite user-agent. We define _wiki_search_raw to query the MediaWiki API and then wrap it as a LangChain tool, allowing us to seamlessly call it within our LangGraph workflow.
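To see the shape of data the tool hands back to the model, here is a standalone sketch that flattens a MediaWiki list=search payload the same way _wiki_search_raw does. The sample payload below is invented for illustration; real snippets contain HTML highlight markup.

```python
# Invented sample of the JSON shape the MediaWiki search API returns.
sample = {
    "query": {
        "search": [
            {"title": "LangGraph", "snippet": "LangGraph is a <span>library</span> ..."},
            {"title": "LangChain", "snippet": "LangChain is a framework ..."},
        ]
    }
}

def parse_search_results(data):
    """Flatten a MediaWiki search payload into {title, snippet_html, url} dicts."""
    out = []
    for item in data.get("query", {}).get("search", []):
        title = item.get("title", "")
        out.append({
            "title": title,
            "snippet_html": item.get("snippet", ""),
            # Page URLs are derived from titles, spaces become underscores.
            "url": f"https://en.wikipedia.org/wiki/{title.replace(' ', '_')}",
        })
    return out

results = parse_search_results(sample)
print(results[0]["url"])  # https://en.wikipedia.org/wiki/LangGraph
```

Returning this compact list (rather than raw API JSON) keeps the tool output small, which matters because everything the tool returns is fed back into the model's context.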

class State(TypedDict):
   messages: Annotated[list, add_messages]


graph_builder = StateGraph(State)


llm_with_tools = llm.bind_tools(TOOLS)


SYSTEM_INSTRUCTIONS = textwrap.dedent("""
You are ResearchBuddy, a careful research assistant.
- If the user asks you to "research", "find info", "latest", "web", or references a library/framework/product,
 you SHOULD call the `wiki_search` tool at least once before finalizing your answer.
- When you call tools, be concise in the text you produce around the call.
- After receiving tool results, cite at least the page titles you used in your summary.
""").strip()


def chatbot(state: State) -> Dict[str, Any]:
   """Single step: call the LLM (with tools bound) on the current messages."""
   return {"messages": [llm_with_tools.invoke(state["messages"])]}


graph_builder.add_node("chatbot", chatbot)
graph_builder.add_node("tools", ToolNode(TOOLS))
graph_builder.add_conditional_edges("chatbot", tools_condition)
graph_builder.add_edge("tools", "chatbot")
graph_builder.add_edge(START, "chatbot")


memory = InMemorySaver()
graph = graph_builder.compile(checkpointer=memory)

We define our graph state to store the running message thread and bind our Gemini model to the wiki_search tool so it can call it when needed. We add a chatbot node and a tools node, wire them with conditional edges, and enable checkpointing with an in-memory saver. We then compile the graph so we can add steps, replay history, and resume from any checkpoint.
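Conceptually, a checkpointer like InMemorySaver records one snapshot of the state per super-step, keyed by the thread ID, so the full history of a conversation can be listed newest-first. The toy class below is an illustrative stand-in for that idea, not the real LangGraph implementation:

```python
class ToyCheckpointer:
    """Illustrative stand-in for a checkpointer: one snapshot list per thread."""
    def __init__(self):
        self.store = {}  # thread_id -> list of state snapshots, oldest first

    def save(self, thread_id, state):
        # Copy the state so later mutations don't rewrite saved history.
        self.store.setdefault(thread_id, []).append(dict(state))

    def history(self, thread_id):
        # Most recent first, matching graph.get_state_history() ordering.
        return list(reversed(self.store.get(thread_id, [])))

cp = ToyCheckpointer()
cp.save("demo-thread-1", {"messages": ["hi"]})
cp.save("demo-thread-1", {"messages": ["hi", "hello!"]})
hist = cp.history("demo-thread-1")
print(len(hist[0]["messages"]))  # 2 -> the newest snapshot holds both messages
```

Keying snapshots by thread ID is what lets two calls with the same `{"configurable": {"thread_id": ...}}` config continue one conversation while other thread IDs stay isolated.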

def print_last_message(event: Dict[str, Any]):
   """Pretty-print the last message in an event if available."""
   if "messages" in event and event["messages"]:
       msg = event["messages"][-1]
       try:
           if isinstance(msg, BaseMessage):
               msg.pretty_print()
           else:
               role = msg.get("role", "unknown")
               content = msg.get("content", "")
               print(f"\n[{role.upper()}]\n{content}\n")
       except Exception:
           print(str(msg))


def show_state_history(cfg: Dict[str, Any]) -> List[Any]:
   """Print a concise view of checkpoints; return the list as well."""
   history = list(graph.get_state_history(cfg))
   print("\n=== 📜 State history (most recent first) ===")
   for i, st in enumerate(history):
       n = st.next
       n_txt = f"{n}" if n else "()"
       print(f"{i:02d}) NumMessages={len(st.values.get('messages', []))}  Next={n_txt}")
   print("=== End history ===\n")
   return history


def pick_checkpoint_by_next(history: List[Any], node_name: str = "tools") -> Optional[Any]:
   """Pick the first checkpoint whose `next` includes a given node (e.g., 'tools')."""
   for st in history:
       nxt = tuple(st.next) if st.next else tuple()
       if node_name in nxt:
           return st
   return None

We add utility functions to make our LangGraph workflow easier to inspect and control. We use print_last_message to neatly display the most recent response, show_state_history to list all saved checkpoints, and pick_checkpoint_by_next to locate a checkpoint where the graph is about to run a specific node, such as the tools step.
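The checkpoint-picking helper only ever inspects each snapshot's `next` tuple, so its logic can be exercised with stand-in snapshot objects. FakeSnapshot below is hypothetical, invented purely to mimic the one field the helper reads:

```python
from typing import List, NamedTuple, Optional, Tuple

class FakeSnapshot(NamedTuple):
    """Minimal stand-in for a state snapshot, exposing only the `next` field."""
    next: Tuple[str, ...]

def pick_by_next(history: List[FakeSnapshot], node_name: str = "tools") -> Optional[FakeSnapshot]:
    # Same logic as pick_checkpoint_by_next: return the first snapshot
    # whose pending `next` nodes include the one we want to resume before.
    for st in history:
        if node_name in (st.next or ()):
            return st
    return None

history = [
    FakeSnapshot(next=()),           # finished step, nothing pending
    FakeSnapshot(next=("tools",)),   # about to run the tools node
    FakeSnapshot(next=("chatbot",)), # about to run the chatbot node
]
chosen = pick_by_next(history)
print(chosen.next)  # ('tools',)
```

Because the history is ordered most recent first, this returns the latest point at which the graph was paused just before the named node.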

config = {"configurable": {"thread_id": "demo-thread-1"}}


first_turn = {
   "messages": [
       {"role": "system", "content": SYSTEM_INSTRUCTIONS},
       {"role": "user", "content": "I'm learning LangGraph. Could you do some research on it for me?"},
   ]
}


print("\n==================== 🟢 STEP 1: First user turn ====================")
events = graph.stream(first_turn, config, stream_mode="values")
for ev in events:
   print_last_message(ev)


second_turn = {
   "messages": [
       {"role": "user", "content": "Ya. Maybe I'll build an agent with it!"}
   ]
}


print("\n==================== 🟢 STEP 2: Second user turn ====================")
events = graph.stream(second_turn, config, stream_mode="values")
for ev in events:
   print_last_message(ev)

We simulate two user interactions in the same thread by streaming events through the graph. We first provide system instructions and ask the assistant to research LangGraph, then follow up with a second user message about building an agent with it. Each step is checkpointed, allowing us to replay or resume from these states later.
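The reason the second turn only needs to send the new user message is the add_messages annotation on the state: it merges incoming messages into the existing thread instead of replacing it. A rough pure-Python sketch of that append behaviour (the real reducer additionally deduplicates and updates messages by ID):

```python
def append_messages(existing, incoming):
    """Simplified reducer: incoming messages extend the running thread."""
    return list(existing) + list(incoming)

state = {"messages": []}
turn1 = [
    {"role": "system", "content": "You are ResearchBuddy."},
    {"role": "user", "content": "Could you do some research on LangGraph?"},
]
turn2 = [{"role": "user", "content": "Ya. Maybe I'll build an agent with it!"}]

# Each graph step merges its new messages into the checkpointed thread.
state["messages"] = append_messages(state["messages"], turn1)
state["messages"] = append_messages(state["messages"], turn2)
print(len(state["messages"]))  # 3 -> system prompt plus two user turns
```

With a plain (un-annotated) key, the second turn would have overwritten the thread and the model would lose all earlier context.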

print("\n==================== 🔁 REPLAY: Full state history ====================")
history = show_state_history(config)


to_replay = pick_checkpoint_by_next(history, node_name="tools")
if to_replay is None:
   to_replay = history[min(2, len(history) - 1)]


print("Chosen checkpoint to resume from:")
print("  Next:", to_replay.next)
print("  Config:", to_replay.config)


print("\n==================== ⏪ RESUME from chosen checkpoint ====================")
for ev in graph.stream(None, to_replay.config, stream_mode="values"):
   print_last_message(ev)


MANUAL_INDEX = None 
if MANUAL_INDEX is not None and 0 <= MANUAL_INDEX < len(history):
   chosen = history[MANUAL_INDEX]
   print(f"\n==================== 🧭 MANUAL RESUME @ index {MANUAL_INDEX} ====================")
   print("Next:", chosen.next)
   print("Config:", chosen.config)
   for ev in graph.stream(None, chosen.config, stream_mode="values"):
       print_last_message(ev)


print("\n✅ Done. You added steps, replayed history, and resumed from a prior checkpoint.")

We replay the full checkpoint history to see how our conversation evolves across steps and identify a useful point to resume. We then “time travel” by restarting from a selected checkpoint, and optionally from any manual index, so we continue the dialogue exactly from that saved state.
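Stripped of the framework, the time-travel pattern is: record a snapshot after every step, pick an earlier one, and continue from it, which forks the thread at that point. A toy illustration of that flow in plain Python (not the LangGraph API):

```python
checkpoints = []  # one snapshot of the thread after each step

def run_step(thread, user_msg, reply):
    """Append a user/assistant exchange and checkpoint the result."""
    thread = thread + [user_msg, reply]       # new list: no shared mutation
    checkpoints.append(list(thread))          # snapshot after this step
    return thread

thread = []
thread = run_step(thread, "Research LangGraph", "Here is what I found...")
thread = run_step(thread, "Maybe I'll build an agent!", "Great idea!")

# "Time travel": resume from the first checkpoint and take a different turn,
# forking the conversation instead of continuing the latest state.
forked = run_step(checkpoints[0], "Actually, compare it to LangChain.", "Sure...")
print(len(forked))  # 4 -> two messages from the old checkpoint plus the new turn
```

In LangGraph, passing None as the input together with a checkpoint's config plays the role of `run_step(checkpoints[i], ...)` here: execution resumes from that saved state rather than from the latest one.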

In conclusion, we have gained a clearer picture of how LangGraph’s checkpointing and time-travel capabilities bring flexibility and transparency to conversation management. By stepping through multiple user turns, replaying state history, and resuming from earlier points, we can experience firsthand the power of this framework in building reliable research agents or autonomous assistants. We recognize that this workflow is not just a demo, but a foundation that we can extend into more complex applications, where reproducibility and traceability are as important as the answers themselves.




Asif Razzaq is the CEO of Marktechpost Media Inc.. As a visionary entrepreneur and engineer, Asif is committed to harnessing the potential of Artificial Intelligence for social good. His most recent endeavor is the launch of an Artificial Intelligence Media Platform, Marktechpost, which stands out for its in-depth coverage of machine learning and deep learning news that is both technically sound and easily understandable by a wide audience. The platform boasts of over 2 million monthly views, illustrating its popularity among audiences.


