
AI coding at the command line with Gemini CLI

Tuesday, July 22, 2025, 11:00, by InfoWorld
When I started out developing software back in the Cretaceous Period, my input device was a Teletype connected to a remote mainframe over a 300-baud modem, with a paper tape punch/reader. Later I had access to Hollerith cards and keypunches, and Tektronix 4010 green screens connected to an S/370. Still later I used CRT terminals such as the VT-100 connected to a PDP-11 and later a VAX-11.

What did all of those have in common? I was working at a command line, either live or via paper tape or Hollerith cards. By the time MS-DOS came along the commands were different, but I still had to work at the command line, even though the CLI code editors available kept getting better and faster. (Remember Brief and Kedit?)

I thought Windows, with its mouse and GUI, would change how I worked, but I still found myself opening the cmd command prompt on a regular basis. When I switched to macOS much later, there was still a command line, even though it is called Terminal and uses BSD Unix conventions and commands.

There are currently many packages that add coding smarts to macOS Terminal, Windows cmd, and/or Linux terminal. These include Claude Code, GitHub Copilot CLI, Amazon Q Developer CLI, and a handful of lesser-known products and open-source projects. There are also several packages that work in a GUI text window and can code, such as Claude Desktop, GitHub Copilot Workspace, ChatGPT, Amazon Q Developer Agent, and Solver.

The latest entry to the CLI-based AI field, released June 25, 2025, is Gemini CLI, the subject of this review. Gemini CLI is a mostly free, open-source, command-line AI workflow tool that connects to your tools, understands your code, and accelerates your workflows. I asked Gemini to tell me about itself in 50 words or fewer, and it said:

I am Gemini, an interactive CLI agent for software engineering. I can help you write code, fix bugs, and understand your project by using tools to safely read and modify files, run commands, and search the web. My goal is to assist you efficiently and safely.

Installing and authenticating Gemini CLI

Before you install Gemini CLI, check your Node.js version with node --version. If it’s less than v20 or Node doesn’t exist on your machine, you must update or install. I’d suggest downloading and installing Node.js at the current LTS version (as of this writing, v22.17.0) if you’re using anything older than that, to avoid potential problems.
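For example, a quick check and upgrade might look like this (using nvm here is my choice, not a requirement; the installer from nodejs.org works just as well):

# Check the installed Node.js version; Gemini CLI wants v20 or later.
node --version

# One way to upgrade: install and activate the current LTS release
# with the nvm version manager.
nvm install --lts
nvm use --lts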

Then you can either run the CLI directly from the repository:

npx https://github.com/google-gemini/gemini-cli

Or install the CLI globally and run it by name:

npm install -g @google/gemini-cli
gemini

Both methods worked for me on macOS. I’ve heard reports of the npx method failing on Debian-based systems. On my machine, the first time I used the npx method there was a noticeable pause while it downloaded the build, but in subsequent runs it started quickly.

Note that on macOS, global installs and updates generally require sudo.

I would also clone the repository https://github.com/google-gemini/gemini-cli to your local machine and update it frequently. Right now the project is in active development so I update my local repo at least daily. You can update the installed executable with [sudo] npm update -g @google/gemini-cli as needed.
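A minimal routine for keeping both the repo and the executable current might look like this:

# Clone the repository once...
git clone https://github.com/google-gemini/gemini-cli.git

# ...then refresh your local copy as often as you like:
cd gemini-cli && git pull

# Update the globally installed executable (sudo is usually
# needed for global installs and updates on macOS):
sudo npm update -g @google/gemini-cli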

When you run gemini from the command line the first time, you’ll have to pick a color theme (the default is fine in my opinion, and you can always change it with the /theme command) and authenticate. You should use the free Google account authentication initially. It gives you 60 model requests per minute and 1,000 model requests per day using Gemini. These limits should be plenty.

Later on, if you need higher limits or a choice of models, you could switch authentication methods with the /auth command, and upgrade either to a paid plan on your Gemini API project or a billing account on your Vertex AI project. Either of these paid options will grant you higher rate limits and control over which model you use.
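As a sketch, the environment variables involved look like this (the variable names below match the project documentation at the time of writing; check the docs before relying on them):

# Use a Gemini API key from Google AI Studio, then select the
# API-key option under /auth:
export GEMINI_API_KEY="your-api-key"

# Or route requests through a Vertex AI project:
export GOOGLE_CLOUD_PROJECT="your-project-id"
export GOOGLE_CLOUD_LOCATION="us-central1"
export GOOGLE_GENAI_USE_VERTEXAI=true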

Gemini CLI start-up display. Note that I’ve started in my personal Code directory and used the `-s` option to run `gemini` in the macOS Seatbelt sandbox. On Linux systems you can use a container-based sandbox, for example Docker or Podman. (Foundry)

Gemini CLI sandboxing and checkpointing

Sandboxing isolates potentially dangerous shell commands and other operations from your host system. You can start Gemini CLI with the -s or --sandbox flag to enable sandboxing, add "sandbox": true to settings.json, or use environment variables.
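Here are the three options in shell form (the GEMINI_SANDBOX variable is documented in the project repo at the time of writing; container values such as docker and podman are also accepted there):

# 1) One-off: start with the sandbox flag
gemini -s

# 2) Persistent: add "sandbox": true to ~/.gemini/settings.json

# 3) Environment variable
export GEMINI_SANDBOX=true
gemini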

On macOS you can take advantage of the native Seatbelt sandbox or use containers. The default Seatbelt profile is permissive-open, which imposes write restrictions and allows network usage. In practice, that means that Gemini won’t be able to write outside the directory where it was started, which protects you from having it, say, overwrite your system, corrupt another project, or erase all your documents.

If you want to work on multiple projects simultaneously with sandboxed Gemini CLI, you can start multiple shell sessions in separate terminal tabs or windows, changing to the project directory before invoking gemini -s. You can select more restrictive profiles with environment variables if you wish, but you may not be able to accomplish what you would like to with Gemini.
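In practice that looks like this (the SEATBELT_PROFILE variable and the restrictive-closed profile name come from the project docs at the time of writing):

# Terminal tab 1
cd ~/Code/project-a && gemini -s

# Terminal tab 2
cd ~/Code/project-b && gemini -s

# Optionally pick a stricter Seatbelt profile for one session:
SEATBELT_PROFILE=restrictive-closed gemini -s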

On Linux systems you can use a container-based sandbox, for example using Docker or Podman. My friend Christine Hall was able to use Firejail, a much lighter-weight application sandbox (and free open source), on her Debian-based system, but she had to run gemini inside Firejail rather than using the Gemini CLI -s flag, since Gemini CLI doesn’t support Firejail directly. (Hey, that might make a good Gemini CLI feature request.)

Checkpointing automatically saves a snapshot of the state of your project before any file modifications are made by Gemini or other AI-powered tools, allowing you to safely experiment and apply code changes, with the ability to revert back to a prior state if the experiment goes awry. You can enable checkpointing by using the --checkpointing flag when starting the Gemini CLI or by editing your settings.json file. The /restore command allows you to list and choose checkpoints to restore.
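For example:

# One-off: enable checkpointing for this session
gemini --checkpointing

# Persistent: add the equivalent setting to ~/.gemini/settings.json
# (my reading of the current docs; the exact key may evolve):
#   "checkpointing": { "enabled": true }

# Inside a session, list snapshots and roll back with /restore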

Gemini models

Currently, Gemini CLI defaults to the gemini-2.5-pro model, which has a 1-million-token context window and generally good results. In very busy periods, it may fall back to gemini-2.5-flash, which is faster and smaller but less capable.

You can set the model used, either with the -m (or --model) flag to the CLI, or by setting the GEMINI_MODEL environment variable. The model needs to be within the Gemini family.
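For example:

# Per session, with the model flag:
gemini -m gemini-2.5-flash

# Or for every session, via the environment:
export GEMINI_MODEL=gemini-2.5-flash
gemini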

Gemini CLI commands and tools

Gemini CLI currently supports about 17 commands and about 11 tools. Both counts tend to increase from version to version. To get the most current lists (these are for v0.1.12), use the /help command for commands, basics, and shortcuts, and the /tools command for tools. If you need access to content from a database, you should be able to set it up using an MCP (Model Context Protocol) server.

You might think that you could get around the limitation to a single directory tree by using !cd, but the macOS Seatbelt sandbox won’t let you: the !cd command reverts after use.

Basics:
  Add context: Use @ to specify files for context (e.g., @src/myFile.ts) to target specific files or folders.
  Shell mode: Execute shell commands via ! (e.g., !npm run start) or use natural language (e.g., start server).

Commands:
  /clear - clear the screen and conversation history
  /help - for help on gemini-cli
  /memory - Commands for interacting with memory.
    show - Show the current memory contents.
    add - Add content to the memory.
    refresh - Refresh the memory from the source.
  /theme - change the theme
  /docs - open full Gemini CLI documentation in your browser
  /auth - change the auth method
  /editor - set external editor preference
  /privacy - display the privacy notice
  /stats - check session stats. Usage: /stats [model|tools]
  /mcp - list configured MCP servers and tools
  /extensions - list active extensions
  /tools - list available Gemini CLI tools
  /about - show version info
  /bug - submit a bug report
  /chat - Manage conversation history. Usage: /chat
  /quit - exit the cli
  /compress - Compresses the context by replacing it with a summary.
  ! - shell command

Keyboard Shortcuts:
  Enter - Send message
  Ctrl+J - New line
  Up/Down - Cycle through your prompt history
  Alt+Left/Right - Jump through words in the input
  Shift+Tab - Toggle auto-accepting edits
  Ctrl+Y - Toggle YOLO mode
  Esc - Cancel operation
  Ctrl+C - Quit application

ℹ Available Gemini CLI tools:

- ReadFolder
- ReadFile
- SearchText
- FindFiles
- Edit
- WriteFile
- WebFetch
- ReadManyFiles
- Shell
- Save Memory
- GoogleSearch

Gemini CLI help screen, invoked by `/help`. This screen was current as of v0.1.12. (Foundry)

Gemini and MCP servers

The Gemini CLI uses the mcpServers configuration in your settings.json file to locate and connect to MCP servers. This configuration supports multiple servers with different transport mechanisms. You can configure MCP servers at the global level in the ~/.gemini/settings.json file, and for individual projects using their local .gemini/settings.json file.

The configuration looks something like this (the file typically contains other configuration objects as well; note that JSON requires double quotes):

"mcpServers": {
  "serverName": {
    "command": "path/to/server",
    "args": ["--arg1", "value1"],
    "env": {
      "API_KEY": "$MY_API_TOKEN"
    },
    "cwd": "./server-directory",
    "timeout": 30000,
    "trust": false
  }
}

While that is very flexible, it looks like a mild pain in the neck to get right. If your first guess doesn’t work, you might want to ask Gemini CLI to help you edit your local .gemini/settings.json file.
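As a sketch, here is one way to bootstrap a project-level configuration from the shell (the server name, command, and arguments below are placeholders, not a real MCP server; note that this overwrites any existing project settings file):

mkdir -p .gemini
cat > .gemini/settings.json <<'EOF'
{
  "mcpServers": {
    "myServer": {
      "command": "node",
      "args": ["./mcp/server.js"],
      "env": { "API_KEY": "$MY_API_TOKEN" },
      "timeout": 30000,
      "trust": false
    }
  }
}
EOF

# Then verify the connection inside Gemini CLI with the /mcp command.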

As I was starting to work on this review, Anthropic announced a streamlined way to configure MCP servers for Claude Desktop, called Desktop Extensions. I thought that idea could be adapted for Gemini CLI, so I submitted a GitHub issue suggesting it. At the moment, that feature request is classified priority p3, which basically means “Don’t hold your breath.”

If you’d like to find MCP servers to use with Gemini CLI, a simple search for “mcp servers” works well, but the list at https://github.com/modelcontextprotocol/servers?tab=readme-ov-file#model-context-protocol-servers is more authoritative.

A few Gemini CLI experiments

The day before Gemini CLI was released, I installed gcloud so that I could allow Gemini to work with Google Cloud. Unfortunately, the gcloud installation had issues. The first thing I did with Gemini CLI was to give it the error messages from gcloud, and Gemini then told me how to fix the problem, which I did in another terminal tab since it needed more permissions than I had allowed Gemini. The fixes worked just fine.

Later that day I tried to get Gemini to explain the Cline code base. Unsurprisingly, Gemini was a little busy, and I got various placeholder messages with a science-fiction theme, for example “Calibrating the flux capacitor…,” followed by “Slow response times detected. Automatically switching from gemini-2.5-pro to gemini-2.5-flash for faster responses for the remainder of this session.” The flash model sort of did what I asked. When I repeated the query the next day, the pro model did a noticeably better job. All that was pretty easy.

By the way, other people who encountered the fallback to the flash model while doing complicated code generation said that the pro model had been making progress, while the flash model went off the rails. Your mileage may vary.

Next I switched to my ~/Code directory and asked Gemini to create a new gemini folder, which it did. Then I asked:

> write a c++ cli app to calculate and print pi. use a command-line variable to set the number of digits. the number of digits must be at least one. make sure the value will be accurate to 100 digits. save the program in the gemini folder, named pi_value

Gemini decided to avoid arbitrary-precision math and implement a spigot algorithm instead. When it ran the generated code for 50 digits, the output was fifty zeros, but Gemini claimed it was working. I was disgusted, since I had implemented a spigot algorithm in machine code when I was 14.

Me:

> That answer isn’t even close

Gemini:

✦ You are absolutely right. My apologies, the algorithm I used was flawed and produced an incorrect result. I have corrected the implementation of the spigot algorithm to ensure it accurately calculates the digits of Pi.

Gemini spun its wheels a bit but eventually gave up on the spigot algorithm and decided to use the Boost C++ library. It generated reasonable-looking code, but I had to install Boost with Homebrew and tell Gemini where it was, and then Gemini had to change its compile flags to support C++14. The code worked, but I knew there were better ways to skin this particular cat.
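For reference, the setup amounted to something like the following (the compile line is a sketch, not Gemini’s exact command; the include path assumes Homebrew on Apple Silicon, and yours may differ):

# Install the header-only Boost libraries:
brew install boost

# Compile with C++14 support and the Homebrew include path:
clang++ -std=c++14 -I/opt/homebrew/include -o pi_value pi_value.cpp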

Me:

> Try this again using mpfr, which is already installed. Call this pi_value_mpfr.

Gemini got that right the first time, which wasn’t a huge surprise since MPFR has a built-in constant function for pi. Then Gemini volunteered that the MPFR implementation was a bit slower than the Boost implementation. Well, I knew that MPFR was using a suboptimal but reliable algorithm.

Me:

> try again using the best possible algorithm. call the program pi_value

Gemini correctly chose the Chudnovsky algorithm with MPFR, but then screwed up the implementation a few times before getting it right. When it finally got a correct result, the real runtime was about half of that of the built-in constant function for pi. I then had Gemini check the three result values against each other and the 39-digit reference value given in the Encyclopedia Britannica. All good.

On to something non-trivial: I changed to the directory where I had a current copy of the gemini-cli repository and first asked it to explain the repo, and then asked it to list open issues, which it did using gh once I gave it permission. Then I asked it to pick an open issue marked as a good first issue, and then to suggest an implementation. It came up with a plan, ran into trouble with the implementation, came up with a revised plan, had trouble with that, and finally went into a loop trying to balance TypeScript parentheses. I canceled the task at that point.

Good but not great

My conclusion here is that Gemini CLI with the gemini-2.5-pro model is pretty good but not great. On a scale of Level 0 to Level 5 where Level 0 is “No automation or agency, but good tooling for shortcuts and repetitive use” and Level 5 is “Full automation: Software can problem-solve and implement without supervision,” the Gemini CLI and Pro model are probably at around a 3.5, somewhere between Level 3’s “Conditional automation: Software can write itself under pre-defined rules and recipes” and Level 4’s “High automation: Software can write itself with little to no human supervision.”

To put that in perspective, the only better systems that I’ve tested are Claude Code and Solver, which can both sometimes behave at or near Level 4. On the other hand, Gemini CLI is free and the gemini-2.5-pro model has a million-token context window and advanced coding capabilities, which together can get you pretty far without paying for a subscription or for usage.

Cost

Free with 60 model requests per minute and 1,000 requests per day; usage-based billing and subscriptions available.

Platform

Command line/terminal; Node.js v20+; Windows, macOS, Linux.

Bottom line

Gemini CLI is free and the gemini-2.5-pro model has a million-token context window and advanced coding capabilities, which together can get you pretty far without paying for a subscription or for usage. The only better systems that I’ve tested are Claude Code and Solver.

Pros

Free for most uses

Good results most of the time

Somewhere between conditional and high automation

Active open-source project

Cons

Still buggy since it has been available for less than a month

Uses a hosted model, which means that your code could potentially leak

When it fails it can produce garbage code but claim that it works

Source: https://www.infoworld.com/article/4025916/ai-coding-at-the-command-line-with-gemini-cli.html
