AI14: How to build a productivity AI agent in 24 lines of code
If you have tried figuring out AI automation or agent builders, I'm confident you can build this.
Building an AI agent can seem pretty daunting.
Coding is already scary to many people. Now, we have to add AI to it?
But actually, building an AI agent that can handle simple tasks is relatively easy nowadays.
Even if you do not know how to code, I believe you can create the AI agent I’m about to show you.
It is just 24 lines of code.
And using this AI agent is free. You can run it from your computer. There’s no need to pay for any AI agent builder.
Once you have succeeded, I hope you will be less afraid of and more excited to try building other AI agents.
Agent = LLM + Tools
When I was building my first AI assistant, I intentionally overcomplicated things and created a multi-agent system with retrieval-augmented generation (RAG).
But over the past few months, I learned that giving a powerful LLM useful tools is sufficient to create an AI agent that can do multi-step tasks.
One big unlock is that models like Claude 3.5 Sonnet are intelligent enough to come up with a plan to complete a task, including using tools to research, get data, and send information. Essentially, we are telling the LLM, “Hey, help me do this task. You have these tools to help you.” And the LLM will figure things out itself and even find workarounds when it is stuck.
A big part of what’s left is giving LLMs the tools to use. LLMs on their own cannot search the Internet, use apps, or do things on our computer unless they are given tools to do those things. There are several options for this:
Create your own tools (essentially functions) by coding
Use MCP servers
Use products (e.g. Composio) or frameworks (e.g. LangChain) that provide tools
But these options aren’t as simple, or as open source, as we’d like.
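To make the first option concrete: a tool is just a function plus a machine-readable description of its parameters that the LLM can read. Here is a minimal, stdlib-only sketch (the function and the spec format are illustrative, not any particular provider's API):

```python
import inspect

def get_top_posts(n: int = 3) -> list:
    """Return the titles of the top n posts (stubbed for illustration)."""
    return [f"Post {i}" for i in range(1, n + 1)]

def to_tool_spec(fn):
    # Build a simple JSON-schema-like spec from the function's signature,
    # roughly what LLM providers expect in their "tools" payload
    params = {}
    for name, param in inspect.signature(fn).parameters.items():
        params[name] = {"type": param.annotation.__name__}
    return {"name": fn.__name__, "description": fn.__doc__, "parameters": params}

spec = to_tool_spec(get_top_posts)
print(spec["name"])        # get_top_posts
print(spec["parameters"])  # {'n': {'type': 'int'}}
```

The LLM never runs the function itself; it only sees the spec and replies with the name and arguments it wants called, and your code executes the call.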
So, this week, SK and I released Stores, a little open-source project to make giving LLMs tools super simple.
Stores is a simple Python library that lets you give LLMs tools in as few as three lines of code, instead of jumping through complicated hoops. It also simplifies tedious tasks such as formatting tools for different LLM providers, handling credentials, and executing tool calls. I created several template scripts, which you can copy and use (more below).
We have also been creating open-source tools for apps such as Notion, Slack, and Gmail, so that you don’t have to build them yourself. Our hope is that other developers will find this approach valuable and contribute their own tools, creating a comprehensive arsenal for everyone to use.
Thanks to powerful LLMs and easy-to-use tools, it has become easier than ever to code your own AI agents.
Let’s do it together!
An AI agent that can look at the tasks in my Todoist, research, email me the results, and close the tasks
The specific task I need help with is “Email alfredlua@gmail.com the top 3 HN posts”.
Creating this AI agent takes only four simple steps.
To show you how simple it is, here is the entire script for this AI agent. There are just 24 lines of code, excluding empty lines and comments. I’ll explain what the code does below.
import os

from dotenv import load_dotenv
from llama_index.core.agent import AgentRunner
from llama_index.core.tools import FunctionTool
from llama_index.llms.google_genai import GoogleGenAI

import stores

# Load environment variables
load_dotenv()

# Load tools and set the required environment variables
index = stores.Index(
    ["silanthro/todoist", "silanthro/hackernews", "silanthro/send-gmail"],
    env_var={
        "silanthro/todoist": {
            "TODOIST_API_TOKEN": os.environ["TODOIST_API_TOKEN"],
        },
        "silanthro/send-gmail": {
            "GMAIL_ADDRESS": os.environ["GMAIL_ADDRESS"],
            "GMAIL_PASSWORD": os.environ["GMAIL_PASSWORD"],
        },
    },
)

# Wrap tools with LlamaIndex FunctionTool
tools = [FunctionTool.from_defaults(fn=fn) for fn in index.tools]

# Initialize the LlamaIndex agent with tools
llm = GoogleGenAI(model="models/gemini-2.0-flash-001")
agent = AgentRunner.from_llm(tools, llm=llm, verbose=True)

# Get the response from the LlamaIndex agent. The LlamaIndex agent will
# automatically execute the tool calls.
response = agent.chat("Help me complete my tasks for today.")
print(f"Assistant response: {response}")
And no fancy prompting is required. We are going to use this:
Help me complete my tasks for today
Our AI agent will then figure out what to do and do it.
Step 0: Set up credentials
To allow our AI agents to use Todoist and send emails via Gmail, we need to provide some credentials. Generally, allowing AI agents to do things with our accounts via APIs is more secure than via the accounts directly because there is a limited number of things that can be done via APIs.
Copy the following into a .env file and update the credentials accordingly. This file and your credentials should be stored only on your computer and will be accessed by the AI agent when required to do things for you. Do not share this file anywhere!
I’m using Google Gemini 2.0 Flash because it’s free and good enough for this task. You can use other models if you prefer.
TODOIST_API_TOKEN=<Get this from your account https://www.todoist.com/help/articles/find-your-api-token-Jpzx9IIlB>
GMAIL_ADDRESS=<Your Gmail address>
GMAIL_PASSWORD=<Your Gmail app password, not regular password https://myaccount.google.com/apppasswords>
GOOGLE_API_KEY=<Your Google Gemini API key https://aistudio.google.com/apikey>
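For intuition, load_dotenv essentially reads the KEY=VALUE lines in this file into os.environ so your script can access them. A minimal stdlib sketch of what it does (simplified; the real python-dotenv library also handles quoting, comments, and more):

```python
import os

def load_env_file(path=".env"):
    # Read KEY=VALUE lines into os.environ (simplified version of load_dotenv)
    with open(path) as f:
        for line in f:
            line = line.strip()
            if line and not line.startswith("#") and "=" in line:
                key, _, value = line.partition("=")
                os.environ.setdefault(key.strip(), value.strip())

# Example usage with a throwaway file
with open("demo.env", "w") as f:
    f.write("TODOIST_API_TOKEN=abc123\n")
load_env_file("demo.env")
print(os.environ["TODOIST_API_TOKEN"])  # abc123
```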
Step 1: Load tools
I want my AI agent to look at the tasks in my Todoist, research Hacker News (our demo task here), email me the results, and close the task. To achieve this, I’d need to give an LLM the relevant tools. This is where Stores comes in.
We have already built these tools for you. All you have to do is to list the tools and credentials.
Let’s create a complete-task.py file:
import os

from dotenv import load_dotenv

import stores

# Load environment variables (i.e. credentials)
load_dotenv()

# Load tools and set the required environment variables
index = stores.Index(
    ["silanthro/todoist", "silanthro/hackernews", "silanthro/send-gmail"],
    env_var={
        "silanthro/todoist": {
            "TODOIST_API_TOKEN": os.environ["TODOIST_API_TOKEN"],
        },
        "silanthro/send-gmail": {
            "GMAIL_ADDRESS": os.environ["GMAIL_ADDRESS"],
            "GMAIL_PASSWORD": os.environ["GMAIL_PASSWORD"],
        },
    },
)
Step 2: Create the AI agent
Then, we need to tell our LLM that it has these tools so that it will ask for them when necessary.
To make things simpler, I’m using a framework called LlamaIndex. It provides an easy way to create a ReAct agent, which does two things for us:
The LLM will think about what it needs to do, take an action, observe the output, and then think about the next step. I could tell the LLM to do this via the system prompt, but LlamaIndex does that for me.
Usually, I would have to take the LLM response, execute the tool call, put the result into the chat history, and then get the LLM to generate another response. Again, I could write all this code myself (some examples here), but LlamaIndex does it all for me.
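Under the hood, that loop looks roughly like this. Here is a stdlib-only sketch with a fake LLM standing in for the real model (this is an illustration of the pattern, not LlamaIndex’s actual implementation):

```python
def close_task(task_id: str) -> str:
    return f"Task {task_id} closed"

TOOLS = {"close_task": close_task}

def fake_llm(history):
    # Stand-in for a real LLM: request a tool call first, then finish
    if not any(msg["role"] == "tool" for msg in history):
        return {"tool": "close_task", "args": {"task_id": "42"}}
    return {"answer": "Done! I closed the task."}

def run_agent(task, max_iterations=5):
    history = [{"role": "user", "content": task}]
    for _ in range(max_iterations):
        response = fake_llm(history)
        if "answer" in response:           # Thought: no more actions needed
            return response["answer"]
        fn = TOOLS[response["tool"]]       # Action: execute the requested tool
        result = fn(**response["args"])
        history.append({"role": "tool", "content": result})  # Observation
    raise ValueError("Reached max iterations.")

print(run_agent("Close task 42"))  # Done! I closed the task.
```

The loop runs until the LLM stops requesting tools or the iteration limit is hit, which is also why you may see the “Reached max iterations” error mentioned below.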
In the same complete-task.py file, add:
from llama_index.core.agent import AgentRunner
from llama_index.core.tools import FunctionTool
from llama_index.llms.google_genai import GoogleGenAI
# Set the LLM to use
llm = GoogleGenAI(model="models/gemini-2.0-flash-001")
# Set the tools (To use LlamaIndex, we have to wrap our tools with its `FunctionTool` wrapper, which isn't required by most other providers)
tools = [FunctionTool.from_defaults(fn=fn) for fn in index.tools]
# Create the agent with the LLM and tools
agent = AgentRunner.from_llm(tools, llm=llm, verbose=True)
If you are hitting the default limit for iterations (“ValueError: Reached max iterations.”), you can increase the limit by adding max_iterations:
agent = AgentRunner.from_llm(tools, llm=llm, verbose=True, max_iterations=50)
Step 3: Run the agent
Finally, we only need one line of code to get the agent going. This line will tell the agent our task, and the agent will work on it until it’s completed.
Again, in the same complete-task.py file, add:
response = agent.chat("Help me complete my tasks for today.")
Our AI agent script is ready!
To activate the agent, open up Terminal, navigate to the folder where you have this script[1], and enter:
python complete-task.py
In Terminal, you will be able to see the AI agent’s thoughts and actions:
It used the Todoist tool to get today’s task (i.e. “Email alfredlua@gmail.com the top 3 HN posts”).
It then used the Hacker News tool to get the top three posts.
It then used the Gmail tool to send an email with the posts and their links.
Finally, it used the Todoist tool again to close the task.
This was the result I got:
Those were the top three posts on Hacker News when I ran the script:
The task in Todoist was closed after the email was sent:
If you run into any issues, please let me know!
Other tasks this AI agent can do
The above is just a simple example of what the AI agent can do.
But because our code doesn’t force the AI agent to follow a fixed, rigid workflow and allows it to figure things out itself, it can do a lot more things.
For example, with the tools we have built, here are some other automations we can create:
Curate useful links from Hacker News and store them in Notion
Slack-message me a summary of the latest meeting notes in Notion
Email my teammates a summary of our latest meeting notes, with a link to the meeting notes in Notion
Add to-dos in a Notion page to Todoist
Email or Slack-message me a daily brief of top Hacker News posts, today’s tasks from Todoist, and summaries of meeting notes in Notion
Rename files on my computer based on their content
Add the content of all files in a specific folder on my computer to Notion
What other tasks would you like to do?
Or what other tools would you want to use?
Let me know!
Jargon explained
(I realized this section has slowly shifted from explaining jargon to explaining jargon plus notes from my recent technical mistakes. But the purpose remains the same: to make technical stuff more understandable!)
Tailwind Typography <pre> is not responsive: I encountered this issue while building Stores’ docs. When the code block is not responsive, the remaining text of the article is also not responsive. I initially used the grid sm:grid-cols-1 layout workaround in the linked GitHub discussion to resolve the issue. But it turns out what I needed to do was to add overflow-hidden to the article parent, which seems to force the code block to respect the parent’s width and be responsive.
To be honest, I don’t fully understand why this worked yet and ChatGPT wasn’t helpful enough. If you know why, please teach me!
PEP 508 identifier rules: Identifiers (i.e. package names) in pyproject.toml files cannot contain /, such as “silanthro/slack”.
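Specifically, PEP 508 restricts names to ASCII letters, digits, and interior ., -, and _ characters, and gives a regex for validating them. A quick check using that regex:

```python
import re

# Name pattern from PEP 508 (matched case-insensitively)
NAME_RE = re.compile(r"^([A-Z0-9]|[A-Z0-9][A-Z0-9._-]*[A-Z0-9])$", re.IGNORECASE)

def is_valid_name(name: str) -> bool:
    return NAME_RE.match(name) is not None

print(is_valid_name("silanthro-slack"))  # True
print(is_valid_name("silanthro/slack"))  # False
```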
Interesting links
AI interfaces are still mostly chat, which is like how we used computers before we had graphic user interface (GUI). What is the GUI for AI?
The case against conversational interfaces
AI adoption is a UX problem by @nan yu
Rethinking how we Scaffold AI Agents - This has a cool demo of an LLM-generated interface at the end:
There is MCP, and now there’s A2A. Google recently announced an open protocol (essentially a public set of rules) for agents to talk to one another. As with MCP, the key is whether developers adopt this new standard.
Not all AI-assisted programming is vibe coding (but vibe coding rocks): Or how to vibe code better.
The golden rule below is something I’m trying to follow after committing some bad code I didn’t understand.
My golden rule for production-quality AI-assisted programming is that I won't commit any code to my repository if I couldn't explain exactly what it does to somebody else.
Model Context Protocol has prompt injection security problems: MCP is cool and all, but be careful when using MCP servers.
How are people really using AI? The author scoured Reddit for comments on how people have been using generative AI:
[1] To navigate to a folder on a Mac, you can simply type “cd ” and drag and drop the folder onto Terminal, which will add the path to the folder for you.