🤖 Agentic Coding
See also: the list of agentic coding tools.
🔷 LBNL team: Some agentic coding tools have been tested with CBORG, including the Roo Code plugin for VSCode. See more details here.
Agent Clients and Tools
Goose
(These instructions are adapted from those developed as part of the EGSB Datafest at LBNL in September 2025.)
Goose is an open source AI agent for automating coding tasks. It supports a variety of LLMs and can be used through a desktop app or the command line.
The official quick-start guide is here.
Install Goose
https://block.github.io/goose/docs/getting-started/installation/
- Install the desktop version (there is also a CLI version)
- The desktop version has to be launched from your OS's GUI. Typing `goose` at the CLI only works for the CLI version
- If you're installing Goose on a Mac with brew, it may put the app in your Applications folder without telling you. (If you just unzip the zip file in your Downloads, the app will go in your Downloads folder and you can move it to Applications.)
Get a key on CBORG
🔷 LBNL team: These instructions are for you. If you are not from LBNL, move on to the next section. Otherwise, follow the instructions in this section to get a CBORG API key if you do not have one yet.
Set up Goose
- Launch Goose
- Bug alert: Marcin Joachimiak had this problem: https://github.com/block/goose/issues/1734
- and used this fix: delete the keychain entries for goose in Utilities -> Keychain Access -> search for "goose"
- Choose the "Other providers" option (or "switch models" from Settings)
- Choose one of the models offered (OpenAI is a good choice)
- Enter your API key.
- If using CBORG, for API host enter https://api.cborg.lbl.gov. (You can leave OPENAI_BASE_PATH as it is)
- Settings -> Chat: change Mode from autonomous to manual. (You might also want to change the response style from concise to detailed)
- Click "Chat" in the left menubar. This should give you a normal OpenAI chat interface. Test it by asking it a question. If it doesn't work, you may need to "switch models" and try again.
Set up agents in Goose
Choose "Extensions" in the left menubar. Sierra and Mark recommend turning OFF all extensions (including the developer extension) to start. This will make Goose ask permission for all tool usage and report its progress for each tool use.
Where to find the Goose config files:
- On Windows, Goose stores its configuration files in C:\Users\USERNAME\AppData\Roaming\Block\goose
- whereas on macOS and Linux, Goose reportedly stores (at least one of its many config files) in ~/.goose/config
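To see what Goose has actually written on your machine, you can list those locations directly. This is a sketch based on the paths noted above; exact filenames vary by Goose version, and the Windows path assumes a Unix-like shell such as Git Bash:

```shell
# List whatever Goose config exists at the locations noted above.
# Falls back to a message rather than failing if the directory is absent.
ls -la ~/.goose 2>/dev/null || echo "no ~/.goose directory found"
ls -la "$APPDATA/Block/goose" 2>/dev/null || echo "no Windows-style config found"
```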
An example exercise for setting up an MCP with Goose
(This is adapted from an exercise made by Mark Andrew Miller)
This exercise will have you install the fitness-mcp, which requires that you have:
- Python 3.11+
- uv installed
- https://docs.astral.sh/uv/getting-started/installation/
- Use the standalone installer from your computer's terminal or console
- You may need to open a new shell to use it
- Git installed, if necessary
- https://git-scm.com/downloads
- A GitHub account
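You can quickly confirm the prerequisites above are on your PATH before continuing; the `|| echo` fallbacks just report anything missing instead of erroring out:

```shell
# Print versions of the prerequisites; fall back to a note if a tool is missing.
python3 --version || echo "Python not found"
uv --version || echo "uv not found - see the uv installation link above"
git --version || echo "git not found"
```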
Learn about it here: https://github.com/contextualizer-ai/fitness-mcp
Clone the repository and install the MCP locally:

```shell
git clone https://github.com/contextualizer-ai/fitness-mcp.git
cd fitness-mcp
uv sync
```
Connecting fitness-mcp to your client (like Goose) is on the complicated side right now.
It will become easier when we load it into PyPI!
Go to Settings or Extensions, depending on your Goose version.
- Run `which uv` and copy the output to a notepad
- Go into the fitness-mcp directory in your terminal, run `pwd`, and copy the output to the notepad
- Put that all together with some additional bits, like this (for me, MAM, Mark Andrew Miller!):
- /Users/MAM/.local/bin/uv run --directory /Users/MAM/Documents/gitrepos/compare-mcps/fitness-mcp fitness-mcp
- Don't forget to include the second `fitness-mcp`
- Try pasting that into a shell. You should see a FastMCP splash screen. You can exit with Control-C
- Paste it into a new Goose custom extension
- Try queries like these: https://github.com/contextualizer-ai/fitness-mcp/issues/33
- You will be asked for permissions, possibly several times
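Putting those pieces together can be sketched as below. The paths are hypothetical placeholders, so substitute the `which uv` and `pwd` output from your own machine:

```shell
# Hypothetical placeholder paths -- replace with your own `which uv` and `pwd` output.
UV_PATH="/Users/YOU/.local/bin/uv"   # from `which uv`
MCP_DIR="/Users/YOU/fitness-mcp"     # from `pwd` inside the cloned repo

# The assembled command to paste into Goose's custom extension dialog:
# uv runs the fitness-mcp entry point from inside the cloned directory.
echo "$UV_PATH run --directory $MCP_DIR fitness-mcp"
```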
Claude Code
Claude Code is an "agentic" coding tool that can look through your entire codebase and perform edits on files, write and run tests, do GitHub operations, and more. The advantage is that you don't need to feed several prompts with context into models that we would typically access over the web. It has emerged as an indispensable tool not just for coding, but also for curation and ontology editing.
Claude Code is remarkably good at following declarative instructions like "refactor all of my unit tests to use mocks instead of API calls" or "write me documentation for this project that is similar in spirit to the existing documentation in this other project". Try it! It works very well.
Essentially Claude Code allows you to change your role from writing software to reviewing software. Instead of spending an hour writing code, you spend a few minutes reviewing and critically assessing code that Claude Code writes in a few seconds.
⚠️ Warning: DON'T SKIP THIS STEP! You need to review the changes Claude suggests.
Claude Code uses a CLAUDE.md file to define general instructions for its operation in a project. Chris put together an example CLAUDE.md (see as GitHub Gist here).
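CLAUDE.md is plain Markdown. Chris's gist is the real example; purely as an illustration of the general shape, a hypothetical minimal file might look like:

```markdown
# CLAUDE.md

## Project conventions
- Manage dependencies with uv; run tests with `uv run pytest`
- Follow the existing Google-style docstrings

## Boundaries
- Ask before editing anything under docs/
- Never commit directly to main
```

Everything in this sketch (the commands, directory names, and rules) is invented for illustration; write instructions that match your own project.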
Examples of Claude Code in action:
- Refactoring in the Aurelian tool: https://github.com/monarch-initiative/aurelian/pull/17
- Fixing erroneous relationships in the Uberon ontology: https://github.com/obophenotype/uberon/pull/3508
Recommended reading: Claude Code Best Practices (Anthropic)
GitHub Copilot
GitHub Copilot is an AI-assisted coding agent client built into GitHub and IDEs like VSCode. It can propose inline completions, answer questions about your codebase, and help execute multi-step changes through agentic workflows.
What it's good for:
- Fast, inline code completions in the editor
- Chat-driven refactors, debugging, and code explanations
- Agent-style tasks that iterate across files and fix errors
Anecdotally, it's not quite as good at independent planning as Claude Code, but Copilot can still be used with Claude models, as well as a variety of other models.
Install or enable Copilot
- Create a Copilot subscription or use an eligible free tier (students/teachers/OSS maintainers)
- You may be able to identify yourself to GitHub as a researcher and hence eligible for the free tier
- In VSCode, install the GitHub Copilot extension
- Sign in with your GitHub account and enable Copilot in settings
- Open the agent chat interface (look for the "speech bubble with glitter" icon)
- If necessary, change the Ask option to Agent
Prompt files
Prompt files let you store reusable instructions inside your repo so Copilot can follow your conventions:
- Keep them short and specific (style rules, file locations, test commands)
- Treat them like code: review, version-control, and update when conventions change
- You can re-use CLAUDE.md files for this purpose, and vice-versa
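For repository-wide instructions, GitHub's documentation describes a `.github/copilot-instructions.md` file that Copilot picks up automatically; the contents below are a hypothetical illustration of the short-and-specific style recommended above:

```markdown
<!-- .github/copilot-instructions.md (hypothetical example) -->
We use Python 3.11 with uv for dependency management.
Run tests with `uv run pytest` before proposing changes.
Keep functions short and document them with Google-style docstrings.
```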
Docs: GitHub Copilot documentation
OpenAI Codex
🔷 LBNL team: As of January 2026, the Lab has a limited-time trial of an OpenAI enterprise account. This provides access to OpenAI Codex. You should be able to access it by logging in to OpenAI with your lbl.gov email.
OpenAI Codex is an agentic coding system that can work locally in your repo or in the cloud to plan and execute multi-step tasks. It's designed for longer-horizon work like refactors, bug fixes, and code review.
Codex CLI (local)
Codex CLI runs in your terminal and can read, change, and run code in the current directory:
- Install: `npm i -g @openai/codex`
- Run: `codex`
- First run will prompt you to authenticate with ChatGPT or an API key
Docs: Codex CLI
IDE extension and cloud tasks
- Codex also has a VSCode extension and a cloud mode for longer tasks
- You can kick off tasks from the web and pull results back into your local repo
- Unlike Claude Code and Copilot, Codex appears on the left side of the IDE by default.
ODK-AI
ODK-AI is a Docker container for running Claude Code and other tools with ontologies.
Some familiarity with ODK will be helpful for understanding what ODK-AI can do, but in short, ODK is a collection of tools for building, updating, and otherwise working with ontologies. ODK also defines a set of schemas, configuration parameters, and Makefile-based workflows for common ontology-related tasks. ODK-AI provides the framework for an LLM to use ODK tools.
ODK-AI is still in development but is representative of the kinds of agentic workflows we expect will be necessary to keep ontologies well curated.
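As a rough sketch of what "Makefile-based workflows" means in practice: ODK repositories run make targets inside the standard ODK Docker image, obolibrary/odkfull. The target name below is illustrative and varies per ontology repository; actually running it requires Docker and a repo checkout, so this sketch only assembles and prints the command:

```shell
# Assemble the typical shape of an ODK invocation: mount the current repo
# into the obolibrary/odkfull image and run a make target there.
ODK_IMAGE="obolibrary/odkfull"
cmd="docker run --rm -v \"$PWD\":/work -w /work $ODK_IMAGE make test"
echo "$cmd"
```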
Setting up MCPs with Agent Clients
Each client has its own procedure for adding MCP functionality. Some, like Goose's desktop interface, have a settings menu to walk you through the process. Others may require you to update a configuration file. Some commonalities:
- The MCP setup will require you to specify a command to run. If you have already installed the MCP such that it can be run from the CLI, it may be enough to call it directly (e.g., if itâs named tofu-mcp, the command may simply be tofu-mcp).
- In general, if an MCP is available on PyPI, the command can fetch and run it. For example, bioportal-mcp is on PyPI and can be run with uv, so the MCP command can be `uvx bioportal-mcp`.
- Remember to specify any necessary environment variables in the MCP configuration.
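As a concrete illustration of a file-based setup, several MCP clients (Claude Desktop among them) read a JSON file with an `mcpServers` map. The server name and environment variable below are hypothetical placeholders; check your client's documentation for the exact file location and fields:

```json
{
  "mcpServers": {
    "bioportal": {
      "command": "uvx",
      "args": ["bioportal-mcp"],
      "env": {
        "BIOPORTAL_API_KEY": "your-key-here"
      }
    }
  }
}
```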