MLog

A bilingual blog crafted for our own voice


Building a Claude Code-like Agent from Scratch: An Analysis of the learn-claude-code Open Source Project

Published: Mar 30, 2026 · Updated: Mar 30, 2026 · Reading time: 6 min

The shareAI-lab/learn-claude-code repository is a minimalist, TypeScript-based agent harness inspired by Claude Code. Advocating the philosophy that "the model is the agent," it discards complex prompt chains and workflows in favor of pure Bash environment interactions for automation. Having garnered over 42,000 stars on GitHub, this project serves as an excellent reference for developers looking to explore the underlying engineering of AI agents and build lightweight, highly transparent tools.

Published Snapshot

Source: Publish Baseline

  • Stars: 42,815
  • Forks: 6,596
  • Open Issues: 25

Snapshot Time: 03/30/2026, 12:00 AM

Project Overview

In the field of AI development, as the foundational capabilities of Large Language Models (LLMs) continue to leap forward, the developer community is undergoing a paradigm shift from "heavyweight frameworks" to "minimalist engineering". The recently trending open-source project shareAI-lab/learn-claude-code (repository: https://github.com/shareAI-lab/learn-claude-code) is a representative example of this shift. It is a micro Claude Code-like agent harness built from scratch in TypeScript.

It has recently become a trending topic because its README proposes a core philosophy that directly addresses a major pain point: "The Model IS the Agent". The author argues explicitly that a true agent is the neural network model itself, not a bloated framework, a complex prompt chain, or a drag-and-drop workflow. By returning to the minimalism of "Bash is all you need", the project shows developers how to let the model perceive and manipulate the external world directly through the most basic command-line environment. This back-to-basics design not only lowers the barrier to understanding the underlying logic of agents but also matches developers' current demand for lightweight, highly transparent AI tools.

Core Capabilities and Applicable Boundaries

The core capability of this project lies in providing a minimalist "Agent Harness". It strips away the redundant abstractions found in traditional AI frameworks and establishes a direct communication bridge between the model and the Bash terminal. The core mechanism is simple: the model outputs instructions, the harness executes them in the terminal, and the results (standard output/standard error) are returned intact to the model, forming a closed perception-action loop. The project documentation cites DeepMind's 2013 DQN Atari work, in which a single neural network learned complex gameplay from nothing but raw pixels and scores, as an analogy for how current LLMs can complete complex programming tasks by receiving raw terminal output.
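The perception-action loop described above can be sketched in a few lines of TypeScript. This is an illustrative sketch, not the repository's actual code: the command is run through `bash -c`, and the raw stdout/stderr/exit code (the model's "perception") are returned without any interpretation.

```typescript
// Minimal sketch of the harness's perception-action primitive (illustrative,
// not the repo's actual implementation). The model proposes a Bash command;
// the harness runs it and hands the raw result back to the model.
import { execFile } from "node:child_process";

interface Observation {
  stdout: string;
  stderr: string;
  exitCode: number;
}

function runBash(command: string): Promise<Observation> {
  return new Promise((resolve) => {
    execFile("bash", ["-c", command], (error, stdout, stderr) => {
      // Nonzero exits are not thrown as errors: the raw exit code is part of
      // the observation the model reasons over.
      const exitCode =
        error == null ? 0 : typeof error.code === "number" ? error.code : 1;
      resolve({ stdout, stderr, exitCode });
    });
  });
}
```

A full harness would wrap this in a loop: send the observation back to the model, receive the next command, and repeat until the model declares the task finished.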

Target Audience and Scenarios:

  • Suitable for AI researchers and low-level developers to learn and deconstruct how commercial agents like Claude Code operate under the hood.
  • Suitable for geeks and independent developers to quickly build highly customized, lightweight local CLI automation tools.
  • Suitable as a teaching case for computer science courses or corporate internal training to help trainees understand agent engineering from scratch.

Non-applicable Audience and Scenarios:

  • Not suitable for enterprise-level non-technical users who require out-of-the-box solutions with rich Graphical User Interfaces (GUIs).
  • Not suitable for heavy business scenarios that require complex Multi-Agent Orchestration and built-in massive third-party API integrations.
  • Developers lacking system-level security protection experience should not deploy it directly in production environments containing sensitive data.

Perspectives and Inferences

Based on the project's rapid accumulation of over 42,000 stars in less than a year, it can be inferred that the AI developer community has developed a degree of fatigue with today's mainstream "heavyweight agent frameworks". In the past few years, the industry tended to use complex engineering code to compensate for the shortcomings of model reasoning; now, with the explosion of foundational model capabilities, over-designed middleware has instead become a bottleneck limiting the models' potential.

The explosive popularity of this project indicates that "Harness Engineering" may become the next important branch. The future focus of agent development will shift from "how to teach the model to think" to "how to provide the model with the purest and most efficient execution environment". Furthermore, the project emphasizes that "humans are also agents, perceiving the world through senses, reasoning through the brain, and acting through muscles." This biomimetic perspective infers that future AI interactions will increasingly approach the way humans use computers—that is, directly operating the operating system and command line, rather than through restricted API interfaces. This trend will greatly promote the development of a Local-first and Terminal-first AI tool ecosystem.

30-Minute Getting Started Guide

To quickly experience this micro agent harness, developers can follow these steps to complete the first run within 30 minutes:

  1. Environment Preparation: Ensure Node.js (v18 or above recommended) and a package manager (npm or pnpm) are installed locally.
  2. Clone the Repository: Execute the command in the terminal: git clone https://github.com/shareAI-lab/learn-claude-code.git
  3. Install Dependencies: Enter the project directory cd learn-claude-code, and execute npm install to install TypeScript and related dependencies.
  4. Configure Environment Variables: Create a .env file in the project root directory and fill in the required LLM API key (e.g., Anthropic's API Key): ANTHROPIC_API_KEY=your_api_key_here.
  5. Build and Run: Execute npm run build to compile the TypeScript code. Then start the agent via npm run start or by directly running the compiled CLI entry file.
  6. First Interaction: At the startup prompt, enter a simple natural language task, for example: "List all files ending with .ts in the current directory and count their total lines." Observe how the harness translates the task into Bash commands, executes them, and has the model summarize the results.
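Step 4 above implies a startup check that many harnesses perform before the first model call. The sketch below is hypothetical (the repository's actual entry code may differ); it simply fails fast when `ANTHROPIC_API_KEY` is missing rather than erroring mid-conversation.

```typescript
// Hypothetical startup check behind step 4 (illustrative; not taken from the
// repo). Reads the key from the environment and fails fast if it is absent.
function requireApiKey(
  env: Record<string, string | undefined> = process.env
): string {
  const key = env.ANTHROPIC_API_KEY;
  if (!key) {
    throw new Error("ANTHROPIC_API_KEY is not set; add it to your .env file.");
  }
  return key;
}
```

Failing at startup keeps the error message close to its cause, which matters in a tool whose only interface is the terminal.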

Risks and Limitations

While enjoying the convenience brought by the minimalist architecture, using this project also comes with risks and limitations that cannot be ignored:

  • Data Privacy and Security Compliance: This is the most critical risk. The harness grants the model direct permission to execute Bash commands. Under a prompt injection attack, or if the model hallucinates, it might execute destructive commands like rm -rf /, or exfiltrate sensitive local files (such as .ssh keys) over the network. It must be run in a strictly isolated sandbox environment (such as a Docker container).
  • Uncontrollable Costs: Because the agent autonomously retries and attempts to fix errors it encounters, the perception-action loop can spiral indefinitely on complex tasks. Without a strict upper limit on the number of iterations, it can consume a massive amount of API tokens and generate exorbitant bills.
  • Maintenance and Production Readiness: As a "0 to 1" educational and experimental project, it lacks the comprehensive logging, breakpoint recovery mechanisms, and fine-grained permission controls required for enterprise-level applications. Using it directly for automated operations in a production environment will face extremely high maintenance risks.
  • Capability Ceiling Limited by the Model: The harness itself does not provide any additional logical error-correction capabilities. The success rate of tasks relies 100% on the coding and reasoning capabilities of the underlying connected LLM.
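The cost risk above suggests one concrete mitigation: a hard cap on loop iterations. The sketch below is a hypothetical guard (not part of the repository) that wraps any perception-action step and stops after a fixed budget, whether or not the task has succeeded.

```typescript
// Hypothetical step budget for a perception-action loop (not part of the
// repo): stop after maxSteps iterations even if the model keeps proposing
// fixes, so a stuck task cannot burn tokens indefinitely.
type Step = () => Promise<boolean>; // resolves true once the task is complete

async function runWithBudget(
  step: Step,
  maxSteps: number
): Promise<{ done: boolean; steps: number }> {
  for (let i = 1; i <= maxSteps; i++) {
    if (await step()) {
      return { done: true, steps: i };
    }
  }
  return { done: false, steps: maxSteps };
}
```

Pairing a step budget like this with a per-session token or cost ceiling addresses both halves of the runaway-loop problem.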
