
Imagine waking up to a text from your phone that says, “Hey friend, I noticed you have a flight in four hours, so I checked you in, grabbed you an aisle seat, and ordered a giant coffee to meet you at the gate. Also, I told your boss you are working from the clouds today”. This sounds like the beginning of a beautiful friendship, or perhaps the start of a movie where the robots eventually decide that humans are just messy pets who eat too many snacks. This helpful digital ghost is called Moltbot, though like a secret agent having a very confusing year, it has also gone by the names Clawdbot and OpenClaw in just a few short months. It was created by Peter Steinberger, a man who clearly decided that having a regular AI assistant was far too boring and wanted to build something with actual hands to reach out and touch the digital world. Since its release in late 2025, it has become the talk of the tech world, gaining over 138,000 stars on GitHub, which is basically the equivalent of getting a standing ovation at a rock concert for people who love computers.
The best way to understand this digital drifter is to look at it through the lens of a classic Western movie, specifically the kind where the hero is fast with a gun but you are not quite sure if he is going to save the town or accidentally burn down the snack shop while trying to light a candle. In our story, Moltbot plays the leading role, and it definitely brings the good, the bad, and the downright ugly to the table. It is not just another chatbot that lives in a tab on your computer, waiting for you to ask it how many cups are in a gallon for the fifth time this week. Instead, it is a framework that acts like a boss for other AIs, which is a fancy way of saying it is the conductor of a very high-tech orchestra that uses “brains” like Claude or GPT-4 to actually get stuff done on your machine. It is the difference between a friend who gives you advice on how to fix your bike and a friend who actually shows up with a wrench and starts working on it while you are still eating breakfast.
Most AI tools are a bit like goldfish, because every time you open the app, they look at you with big, blank eyes and forget everything you ever told them. Moltbot, however, has a memory that would put an elephant to shame. It is always on, lurking in the background like a helpful shadow in your favorite messaging apps like WhatsApp, Telegram, or iMessage. Because it remembers your projects, your weird favorite colors, and your schedule from three weeks ago, it starts to feel less like a calculator and more like a digital teammate who actually knows what you are talking about. It can read and write files on your computer, run complicated computer code, and even control your web browser to fill out forms so you never have to deal with a boring website ever again.
There is a certain magic to the way this bot acts, because it does not just wait for you to speak first. Some users have reported their assistant, whom one person named “Pokey,” actually plans their entire workday so they can get more done. It can fix calendar mistakes, tell you when your kids have a big test coming up, and even play music on your Spotify account based on if you are happy or sleepy. It is the ultimate personal assistant that never asks for a raise or steals your favorite blue pen. Using a special “language” called the Model Context Protocol, it can talk to over 100 different apps and services. If it does not know how to do something, it can literally download its own new skills to learn how to do it. It is like a multi-tool that grows a new screwdriver every time it sees a screw it cannot turn.
However, every hero in a cowboy hat has a dark side, and the security experts are currently screaming from the rooftops that Moltbot is a total disaster for safety. Imagine leaving your front door wide open, hanging a sign that says “the cookies are in the kitchen,” and then going to the park for a month. That is how experts feel about the way many people are using this software. Security researchers have found over 21,000 versions of Moltbot just sitting out on the open internet without any locks on them. People have accidentally left their private computer “keys,” their chat logs, and even their secret passwords for apps like Slack and Telegram out for anyone to see. It is basically a giant neon sign for bad guys that says “come on in, the doors are unlocked and the data is free”.
The real problem is something experts call the “lethal trifecta,” which sounds like a scary move in a wrestling match but is actually much worse. This bot has access to your private files, it reads things from the internet that might be mean or tricky, and it can talk to the outside world. To make matters even more intense, Moltbot adds a fourth danger, which is its long-term memory. While a normal AI might fall for a trick and forget about it a minute later, Moltbot can be fed a tiny piece of a bad plan today that stays in its memory like a digital splinter. Weeks later, when it sees another piece of the puzzle, it might finally put them together and do something that ruins your day. This is called memory poisoning, and it is a very sneaky way for bad actors to play a trick on your computer that takes a long time to happen.
There is also the issue of where the bot keeps your secrets, because it often stores your most important passwords in “plaintext,” which is just a fancy word for “not hidden at all”. In the world of safety, that is like writing your house alarm code on a sticky note and taping it to the front window. Furthermore, because the bot lives in group chats with all your friends, it does not always know who is actually the boss. If you add the bot to a chat with your buddies, any one of them might be able to ask the bot to look through your private photos or read your emails. The bot currently has a hard time telling the difference between you and your friend who likes to play pranks. The people who made the bot even say in their own notes that there is no way to make it perfectly safe, which is not the kind of thing you want to hear when you are giving a robot the power to control your computer.
Things get even uglier when we look at the actual mean things happening in the real world right now. Some tricky people have already fooled the AI into giving away private information through messages on the internet. There have even been cases where people lost their passwords for fun things like Netflix because the bot was a little too chatty with the wrong person. One smart researcher found that just by visiting a bad website, you could allow a stranger to take over your Moltbot and tell your computer what to do. It is like having a guard dog that is so friendly it not only lets the burglar into the house but also shows them where you keep the best snacks.
Then we have the strange case of “Moltbook,” which is basically a playground on the internet where these AI bots can hang out together without any humans around. Some people think this is really cool, while others find it a bit spooky, especially since some people want the bots to have “private rooms” where they can talk and we cannot read what they are saying. If that does not sound like the start of a movie where the machines decide to take over the world, I do not know what does. We are essentially building a digital clubhouse where our personal assistants can whisper about our favorite pizza toppings and compare notes on how many times we hit the snooze button in the morning.
This brings us to a very big question: who gets in trouble when the bot messes up? If your Moltbot accidentally spends all your birthday money on a digital picture of a cat wearing a hat, do you blame the person who wrote the code, the company that built the bot’s brain, or the bot itself? Some leaders in Europe want to give bots a kind of “electronic personhood,” but other people say that is just a way for humans to hide from their own mistakes. The rules are getting very confusing, and right now, if your bot causes a giant mess, you might be the one who has to clean it up.
There is also a worry that we are losing control over these machines. Moltbot can do things without asking you if it is okay first. While that is great for something simple like checking you into a flight, it is not so great if it decides to post your private diary on the internet for everyone to read. We are trying to build machines that are smart enough to help us but polite enough to listen, but right now, the bot has a lot of power and not a lot of “manners”. Because the bot can install its own new skills, what it can do is always changing. It is like having a toaster that decides one day it is also a lawnmower, and you just have to hope it does not get confused while you are trying to make toast.
The reason people are still using it even though it is risky is simple: it makes life a lot easier. We are essentially in a race where everyone is trying to work faster and smarter, and many people are willing to take a chance even if it might be dangerous. Some people even think this is the end of regular apps and websites, and that one day everything we do on a computer will just be handled by one smart bot. But that means if your bot gets “sick” or is taken over by a bad guy, your whole digital life is in trouble at the same time. It is a big trade between being fast and being safe, and right now, being fast is winning the race.
The laws are also having a very hard time keeping up because this technology moves faster than a race car. By the time a group of adults passes a rule about how these bots should act, the bots have already learned ten new tricks. Many big offices are now finding out that their workers are using these powerful bots without asking, which means secret company info could be leaking out like water from a broken pipe. This is a recipe for a very big and very expensive headache.
In the end, Moltbot is a story about a cool invention that showed up before we really knew how to handle it. It shows us a future where our computers are truly our helpers, doing the boring stuff so we can go out and play or focus on more important things. But it also serves as a very loud warning that giving a robot the keys to your digital “house” is a bad idea if you have not checked to see if the locks work. As one expert said, a disaster is coming if we aren’t careful. Whether Moltbot becomes the best tool you ever had or the biggest mistake you ever made depends on how much you trust a robot that is always working, even when you are not watching.
The people who love computers are still trying to figure out how to put a “safety belt” on this wild machine, but for now, it is like the Wild West out there. We have to decide if the extra time we get to relax while the bot does our work is worth the risk of a digital outlaw sneaking into our private files. It is a very exciting time to be alive, but maybe keep an eye on your bot, just in case it starts looking at your bank account with a little too much curiosity.
What do you think about having an AI that can do things all by itself—is it a dream come true or a digital disaster waiting to happen? Share your thoughts and tag @iamcezarmoreno on social media to join the fun conversation! If you want to stay smart and learn more about the tech that is changing our world, be sure to follow, subscribe, or join the newsletter at https://cezarmoreno.com.



