AI is not a coworker, it's an exoskeleton (kasava.dev)

91 points by benbeingbin 3 hours ago

103 comments:

by qudat a minute ago

It’s a tool like a linter. It’s a fancy tool, but calling it anything more than a tool is hype

by hintymad an hour ago

In the latest interview with Claude Code's author (https://podcasts.apple.com/us/podcast/lennys-podcast-product...), Boris said that writing code is a solved problem. This brings me to a hypothetical question: what if engineers stop contributing to open source? In that case, would AI still be powerful enough to learn the knowledge of software development in the future? Or has the field of computer science plateaued to the point that most of what we do is a linear combination of well-established patterns?

by fhub 36 minutes ago

He is likely working on a very clean codebase where all the context is already reachable or indexed. There are probably strong feedback loops via tests. Some areas I contribute to have these characteristics, and the experience is very similar to his. But in areas where they don’t exist, writing code isn’t a solved problem until you can restructure the codebase to be more friendly to agents.

Even with full context, writing CSS in a project where vanilla CSS is scattered around and wasn’t well thought out originally is challenging. Coding agents struggle there too, just not as much as humans, even with feedback loops through browser automation.

by therealpygon 27 minutes ago

I don’t believe people who have dedicated their lives to open source will simply want to stop working on it, no matter how much is or is not written by AI. I also have to agree, though: lately I find myself laughing more and more about just how many resources we waste creating exactly the same things over and over in software. I don’t mean generally, like languages, I mean specifically. How many trillions of times has a form with username and password fields been designed, developed, had meetings over, tested, debugged, transmitted, processed, only to ultimately be re-written months later?

I wonder what all we might build instead, if all that time could be saved.

by hintymad 21 minutes ago

> I don’t believe people who have dedicated their lives to open source will simply want to stop working on it, no matter how much is or is not written by AI.

Yeah, hence my question can only be hypothetical.

> I wonder what all we might build instead, if all that time could be saved

If we subscribe to the broken-window fallacy from economics, then the investment into such repetitive work is not investment but waste. Once we stop such investment, we will have a lot more resources to work on something else, bringing about a new chapter of the tech revolution. Or so I hope.

by biztos 39 minutes ago

Or does the field plateau because engineers treat "writing code" as a "solved problem"?

We could argue that writing poetry is a solved problem in much the same way, and while I don't think we especially need 50,000 people writing poems at Google, we do still need poets.

by hintymad 36 minutes ago

> we especially need 50,000 people writing poems at Google, we do still need poets.

I'd assume that an implied concern of most engineers is how many software engineers the world will need in the future. If the situation is like the world needing poets, then the field is only for the lucky few. Most people would be out of a job.

by oxag3n an hour ago

> We're thinking about AI wrong.

And this write up is not an exception.

Why even bother thinking about AI, when Anthropic and OpenAI CEOs openly tell us what they want (quote from recent Dwarkesh interview) - "Then further down the spectrum, there’s 90% less demand for SWEs, which I think will happen but this is a spectrum."

So save the thinking and listen to the intent: replace 90% of SWEs in the near future (6-12 months according to Amodei).

by Galanwe an hour ago

I don't think anyone serious believes this. Replacing developers with a less costly alternative is obviously a very bullish dream for the market; it has existed for as long as I've worked in the field. First it was supposed to be UML-generated code by "architects", then it was supposed to be developers from developing countries, then no-code frameworks, etc.

AI will be a tool, no more, no less. Most likely a good one, but there will still need to be people driving it, guiding it, fixing things for it, etc.

All these discourses from CEOs are just that: stock market pumping. Tech is the most profitable sector and software engineers are costly, so having investors dream about scale plus lower costs is good for the stock price.

by oxag3n 42 minutes ago

Ah, don't get me wrong - I don't believe it's possible for LLMs to replace 90% (or any other share) of SWEs with existing technology.

All I'm saying is: why ponder what AI is (exoskeleton, co-worker, new life form) when its owners' intent is to create a SWE replacement?

If your neighbor is building a nuclear reactor in his shed from a pile of smoke detectors, you don't say "think of this as a science experiment" just because it can't possibly work; you call the police/NRC because of the intent and the actions.

by jacquesm an hour ago

Not without some major breakthrough. What's hilarious is that all these developers building the tools are going to be the first to be without jobs. Their kids will be ecstatic: "Tell me again, dad, so, you had this awesome and well paying easy job and you wrecked it? Shut up kid, and tuck in that flap, there is too much wind in our cardboard box."

by metaltyphoon an hour ago

I have a feeling they internally say "not me, I won't be replaced" and just keep moving...

by oxag3n an hour ago

Or they get FY money and fatFIRE.

by datakazkn 26 minutes ago

The exoskeleton framing resonates, especially for repetitive data work. Parts where AI consistently delivers: pattern recognition, format normalization, first-draft generation. Parts where human judgment is still irreplaceable: knowing when the data is wrong, deciding what 'correct' even means in context, and knowing when to stop iterating.

The exoskeleton doesn't replace instinct. It just removes friction from execution so more cycles go toward the judgment calls that actually matter.

by Bombthecat 6 minutes ago

And your muscles degrade - a pretty good analogy.

by TrianguloY 19 minutes ago

I like this analogy, and in fact I have used it for a totally different reason: why I don't like AI.

Imagine someone going to a local gym and using an exoskeleton to do the exercises without effort. Able to lift more? Yes. Run faster? Sure. Exercising and enjoying the gym? ... No, and probably not.

I like writing code, even if it's boilerplate. It's fun for me, and I want to keep doing it. Using AI to do that part for me is just...not fun.

Someone going to the gym isn't just trying to lift more or run faster; they're trying to improve and to enjoy it. Not using AI for coding works the same way for me.

by gtCameron 4 minutes ago

We've all been raised in a world where we got to practice the 'art' of programming, and get paid extraordinarily well to do so, because the output of that art was useful for businesses to make more money.

If a programmer with an exoskeleton can produce more output that makes more money for the business, they will continue to be paid well. Those who refuse the exoskeleton because they are in it for the pure art will most likely trend towards earning the types of living that artists and musicians do today. The truly extraordinary will be able to create things that the machines can't and will be in high demand; the other 99% will be pursuing an art no one is interested in paying top dollar for.

by finnjohnsen2 an hour ago

I like this. This is an accurate description of the state of AI at this very moment for me. The LLM is (just) a tool which is making me "amplified" for coding and certain tasks.

I will worry about developers being completely replaced when I see something resembling it. Enough people worry about that (or say it to amp stock prices) -- and they like to tell everyone about this future too. I just don't see it.

by DrewADesign an hour ago

Amplified means more work done by fewer people. It doesn’t need to replace a single entire functional human being to do things like kill the demand for labor in dev, which in turn, will kill salaries.

by finnjohnsen2 an hour ago

I would disagree. Amplified means you and I get more s** done.

Unless there's a fixed amount of software the world needs per year to keep everyone happy -- beyond which nobody wants more -- and we happen to be at that point right NOW, this second.

I think not. We can make more (in less time) and people will get more. This is the mental "glass half full" approach I think. Why not take this mental route instead? We don't know the future anyway.

by kiba an hour ago

Jevons paradox means this is untrue: it leads to more work, not less.

by inglor_cz 37 minutes ago

Hm. More of what? Functionality, security, performance?

Current software is often buggy because the pressure to ship is just too high. If AI can fix some loose threads within, the overall quality grows.

Personally, I would welcome a massive deployment of AI to root out various zero-days from widespread libraries.

But we may instead get a larger quantity of even more buggy software.

by emp17344 42 minutes ago

This is incorrect. It’s basic economics - technology that boosts productivity results in higher salaries and more jobs.

by gorjusborg 34 minutes ago

Well, that depends on whether the technology requires expertise that is rare and/or hard to acquire.

I'd say that using AI tools effectively to create software systems is in that class currently, but it isn't necessarily always going to be the case.

by cogman10 an hour ago

The more likely outcome is that fewer devs will be hired as fewer devs will be needed to accomplish the same amount of output.

by HPsquared an hour ago

The old shrinking markets, aka lump of labour, fallacy. It's a bit like dreaming of that mythical day when all of the work will be done.

by cogman10 40 minutes ago

No it's not that.

Tell me, when was the last time you visited your shoe cobbler? How about your travel agent? Have you chatted with your phone operator recently?

The lump-of-labour fallacy says it's a fallacy to claim that automation reduces the net amount of human labor - importantly, across all industries. It does not say that automation won't eliminate or reduce jobs in specific industries.

It's an argument that jobs lost to automation aren't a big deal because there's always work somewhere else but not necessarily in the job that was automated away.

by slopinthebag 24 minutes ago

When computers came onto the market and could automate a large percentage of office jobs, what happened to the job market for office jobs?

by cogman10 18 minutes ago

They changed, significantly.

We lost the pneumatic tube [1] maintenance crew. Secretarial work nearly went away. A huge number of bookkeepers in the banking industry lost their jobs. The job of a typist was eliminated/merged into everyone else's job. The job of a "computer" (someone who does computations) was eliminated.

What we ended up with was primarily a bunch of customer service, marketing, and sales workers.

There was never an "office worker" job. But there were a lot of jobs under the umbrella of "office work" that were fundamentally changed and, crucially, your experience in those fields didn't necessarily translate over to the new jobs created.

[1] https://www.youtube.com/watch?v=qman4N3Waw4

by slopinthebag 16 minutes ago

I expect something like this will happen to some degree, although not to the extent of what happened with computers.

But the point is that we didn't just lose all of those jobs.

by cogman10 10 minutes ago

Right, and my point is that specific jobs, like the job of a dev, were eliminated or significantly curtailed.

New jobs may be waiting for us on the other side of this, but my job, the job of a dev, is specifically under threat with no guarantee that the experience I gained as a dev will translate into a new market.

by m_ke 2 hours ago

It's the new underpaid employee that you're training to replace you.

People need to understand that we have the technology to train models to do anything that you can do on a computer; the only thing that's missing is the data.

If you can record a human doing anything on a computer, we'll soon have a way to automate it.

by mylifeandtimes 7 minutes ago

> the new underpaid employee that you're training to replace you.

and who is also compiling a detailed log of your every action (and inaction) into a searchable data store -- which will certainly never, NEVER be used against you

by xyzzy123 an hour ago

Sure, but do you want abundance of software, or scarcity?

The price of having "star trek computers" is that people who work with computers have to adapt to the changes. Seems worth it?

by worldsayshi an hour ago

My only objection here is that technology won't save us unless we also have a voice in how it is used. I don't think personal adaptation is enough for that. We need to adapt the ways we engage with power.

by almostdeadguy 39 minutes ago

Both abundance and scarcity can be bad. If you can't imagine a world where abundance of software is a very bad thing, I'd suggest you have a limited imagination?

by agumonkey an hour ago

It's a strangely morbid economic dependency. AI companies promise incredible things, but AI agents cannot produce them themselves; they need to eat you slowly first.

by xnx 2 hours ago

Exactly. If there's any opportunity around AI it goes to those who have big troves of custom data (Google Workspace, Office 365, Adobe, Salesforce, etc.) or consultants adding data capture/surveillance of workers (especially high paid ones like engineers, doctors, lawyers).

by Gigachad an hour ago

Data clearly isn't the only issue. LLMs have been trained on orders of magnitude more data than any person has ever seen.

by polotics an hour ago

How much practice have you got with agent-assisted software development? Which rough edges, surprising failure modes, and unexpected strengths and weaknesses have you already identified?

How much do you wish someone else had done your favorite SOTA LLM's RLHF?

by badgersnake an hour ago

I think we’re past the “if only we had more training data” myth now. There are pretty obviously far more fundamental issues with LLMs than that.

by cesarvarela 2 hours ago

LLMs have a large quantity of chess data and still can't play for shit.

by dwohnitmok an hour ago

Not anymore. This benchmark is for LLM chess ability: https://github.com/lightnesscaster/Chess-LLM-Benchmark?tab=r.... LLMs are graded according to FIDE rules so e.g. two illegal moves in a game leads to an immediate loss.

This benchmark doesn't have the latest models from the last two months, but Gemini 3 (with no tools) is already at 1750 - 1800 FIDE, which is probably around 1900 - 2000 USCF (about USCF expert level). This is enough to beat almost everyone at your local chess club.
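
A minimal sketch of how a grading rule like that might be implemented, assuming the python-chess library; white_model and black_model are hypothetical callables (FEN in, SAN move out) standing in for the LLMs, not the benchmark's actual harness:

    # Hypothetical grading loop: the side that makes a second illegal move
    # immediately forfeits, roughly mirroring the FIDE-style rule described.
    import chess

    def play_graded_game(white_model, black_model, max_illegal=2):
        board = chess.Board()
        illegal = {chess.WHITE: 0, chess.BLACK: 0}
        while not board.is_game_over():
            mover = white_model if board.turn == chess.WHITE else black_model
            san = mover(board.fen())           # model proposes a move in SAN
            try:
                move = board.parse_san(san)    # raises ValueError if illegal
            except ValueError:
                illegal[board.turn] += 1
                if illegal[board.turn] >= max_illegal:
                    # second illegal move: immediate loss for the mover
                    return "0-1" if board.turn == chess.WHITE else "1-0"
                continue                       # re-prompt on the same position
            board.push(move)
        return board.result()                  # "1-0", "0-1" or "1/2-1/2"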

by cesarvarela an hour ago

Yeah, but 1800 FIDE players don't make illegal moves, and Gemini does.

by famouswaffles 10 minutes ago

That benchmark methodology isn't great, but regardless, LLMs can be trained to play Chess with a 99.8% legal move rate.

by runarberg an hour ago

Wait, I may be missing something here. These benchmarks are gathered by having models play each other, and the second illegal move forfeits the game. This seems like a flawed method, as the models that are more prone to illegal moves are going to inflate the ratings of the models that are less prone to them.

Additionally, how do we know the model isn't benchmaxxed to eliminate illegal moves?

For example, here is the list of games by Gemini-3-pro-preview. In 44 games it performed 3 illegal moves (if I counted correctly) but won 5 games because the opponent forfeited due to illegal moves.

https://chessbenchllm.onrender.com/games?page=5&model=gemini...

I suspect the ratings here may be significantly inflated due to a flaw in the methodology.

EDIT: I want to suggest a better methodology here (I am not gonna do it; I really really really don’t care about this technology). Have the LLMs play rated engines and rated humans, with the first illegal move forfeiting the game (the same rule applying to humans).

by emp17344 39 minutes ago

That’s a devastating benchmark design flaw. Sick of these bullshit benchmarks designed solely to hype AI. AI boosters turn around and use them as ammo, despite not understanding them.

by famouswaffles 9 minutes ago

Relax. Anyone who's genuinely interested in the question will see with a few searches that LLMs can play chess fine, although the post-trained models mostly seem to have regressed. The problem is that people are more interested in validating their own assumptions than anything else.

https://arxiv.org/abs/2403.15498

https://arxiv.org/abs/2501.17186

https://github.com/adamkarvonen/chess_gpt_eval

by runarberg 11 minutes ago

I like this game between grok-4.1-fast and maia-1100 (engine, not LLM).

https://chessbenchllm.onrender.com/game/37d0d260-d63b-4e41-9...

This exact game has been played 60 thousand times on lichess. The piece sacrifice Grok performed on move 6 has been played 5 million times on lichess. Every single move Grok made is also the top played move on lichess.

This reminds me of Stefan Zweig’s The Royal Game where the protagonist survived Nazi torture by memorizing every game in a chess book his torturers dropped (excellent book btw. and I am aware I just committed Godwin’s law here; also aware of the irony here). The protagonist became “good” at chess, simply by memorizing a lot of games.

by famouswaffles 4 minutes ago

The LLMs that can play chess (i.e. that don't make an illegal move every game) do not play simply from memorized lines.

by deadbabe an hour ago

Why do we care about this? Chess AI has long been a solved problem, and LLMs are just an overly brute-forced approach. They will never become very efficient chess players.

The correct solution is to have a conventional chess AI as a tool and use the LLM as a front end for humanized output. A software engineer who proposes just doing it all via raw LLM should be fired.

by rodiger an hour ago

It's a proxy for generalized reasoning.

The point isn't that LLMs are the best AI architecture for chess.

by runarberg an hour ago

> It's a proxy for generalized reasoning.

And so far I am only convinced that they have succeeded in appearing to have generalized reasoning. That is, when an LLM plays chess it is performing Searle’s Chinese room thought experiment while claiming to pass the Turing test.

by iugtmkbdfil834 an hour ago

Hm... but do they need it? At this point, we do have custom tools that beat humans. In a sense, all LLMs need is a way to connect to those tools (and the same is true for counting and many other tasks).

by Windchaser an hour ago

Yeah, but you know that manually telling the LLM to operate other custom tools is not going to be a long-term solution. And if an LLM could design, create, and operate a separate model, and then return/translate its results to you, that would be huge, but it also seems far away.

But I'm ignorant here. Can anyone with a better background of SOTA ML tell me if this is being pursued, and if so, how far away it is? (And if not, what are the arguments against it, or what other approaches might deliver similar capacities?)

by yunyu 34 minutes ago

This has been happening for the past year on verifiable problems (did the change you made in your codebase work end-to-end, does this mathematical expression validate, did I win this chess match, etc...). The bulk of data, RL environment, and inference spend right now is on coding agents (or broadly speaking, tool use agents that can make their own tools).

Recent advances in mathematical/physics research have all been with coding agents making their own "tools" by writing programs: https://openai.com/index/new-result-theoretical-physics/
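
For concreteness, a minimal sketch of what a "verifiable" reward can look like in the coding case; the repo path and the choice of pytest as verifier are illustrative assumptions, not a description of any particular lab's setup:

    # Sketch of a verifiable reward: the environment checks the outcome
    # mechanically (did the tests pass?) instead of relying on a human
    # or another model to judge the agent's change.
    import subprocess

    def coding_reward(repo_dir: str) -> float:
        try:
            result = subprocess.run(
                ["pytest", "-q"],      # assumed verifier: the project's test suite
                cwd=repo_dir,
                capture_output=True,
                text=True,
                timeout=600,
            )
        except subprocess.TimeoutExpired:
            return 0.0                 # hanging runs count as failures
        return 1.0 if result.returncode == 0 else 0.0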

by BeetleB an hour ago

Are you saying an LLM can't produce a chess engine that will easily beat you?

by emp17344 38 minutes ago

Plagiarizing Stockfish doesn’t make me good at chess. Same principle applies.

by menaerus an hour ago

Did you already forget about AlphaZero?

by obsidianbases1 3 minutes ago

And markdown is like the data streamed from the brain to the exoskeleton.

Exoskeleton dexterity is something like coherence in the markdown stream.

by random3 13 minutes ago

I guess we'll see a lot of analogies and have to get used to it, although most will be off.

AI can be an exoskeleton. It can be a co-worker and it can also replace you and your whole team.

The "Office Space"-question is what are you particularly within an organization and concretely when you'll become the bottleneck, preventing your "exoskeleton" for efficiently doing its job independently.

There's no other question that's relevant, for any practical purpose, to your employer or to your well-being as a person who presumably needs to earn a living based on their utility.

by qudat 8 minutes ago

> It can be a co-worker and it can also replace you and your whole team.

You drank the Kool-Aid, m8. It fundamentally cannot replace a single SWE and never will without fundamental changes to the model construction. If there is displacement, it’ll be short-lived once the hype doesn’t match reality.

Go take a gander at OpenClaw's codebase and feel at ease about your job security.

I have seen zero evidence that the frontier model companies are innovating. All I see is full steam ahead on scaling what exists, but correct me if I’m wrong.

by delichon 3 hours ago

If we find an AI that is truly operating as an independent agent in the economy without a human responsible for it, we should kill it. I wonder if I'll live long enough to see an AI terminator profession emerge. We could call them blade runners.

by orphea 2 hours ago

> an AI that is truly operating as an independent agent in the economy without a human responsible for it

Sounds like the "customer support" in any large company (think Google, for example), to be honest.

by WolfeReader 2 hours ago

It happened not too long ago! https://news.ycombinator.com/item?id=46990729

by Windchaser an hour ago

Was it ever verified that this was an independent AI?

by protocolture 25 minutes ago

Petition to make "AI is not X, but Y" articles banned or limited in some way.

by ottah 26 minutes ago

Make centaurs, not unicorns. The human is almost always going to be the strongest element in the loop, and the most efficient. Augmenting human skill will always outperform present day SOTA AI systems (assuming a competent human).

by pavlov 2 hours ago

> “The AI handles the scale. The human interprets the meaning.”

Claude is that you? Why haven’t you called me?

by ares623 2 hours ago

But the meaning has been scaled massively. So the human still kinda needs to handle the scale.

by yifanl an hour ago

AI is not an exoskeleton, it's a pretzel: It only tastes good if you douse it in lye.

by coffeefirst 23 minutes ago

Finally an AI take I can get behind.

by rishabhaiover an hour ago

it's a dry scone

by bGl2YW5j 2 hours ago

I like the analogy and will ponder it more. But it didn't take long before the article started spruiking Kasava's amazing solution to the problem they just presented.

by acjohnson55 an hour ago

> Autonomous agents fail because they don't have the context that humans carry around implicitly.

Yet.

This is mostly a matter of data capture and organization. It sounds like Kasava is already doing a lot of this. They just need more sources.

by bwestergard an hour ago

Self-conscious efforts to formalize and concentrate information in systems controlled by firm management, known as "scientific management" by its proponents and "Taylorism" by many of its detractors, are a century old[1]. It has proven to be a constantly receding horizon.

[1]: https://en.wikipedia.org/wiki/Scientific_management

by xlerb an hour ago

Humans don’t have an internal notion of “fact” or “truth.” They generate statistically plausible text.

Reliability comes from scaffolding: retrieval, tools, validation layers. Without that, fluency can masquerade as authority.

The interesting question isn’t whether they’re coworkers or exoskeletons. It’s whether we’re mistaking rhetoric for epistemology.
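
As a rough illustration of that scaffolding point, a minimal sketch in which retrieve, generate, and check_claims are hypothetical stand-ins rather than any real library's API:

    # Sketch of "reliability from scaffolding": retrieval grounds the model,
    # a validation layer checks the draft against sources, and the system
    # declines to answer rather than returning fluent but unverified text.
    def answer_with_scaffolding(question, retrieve, generate, check_claims,
                                max_tries=3):
        docs = retrieve(question)                    # retrieval layer
        prompt = question
        for _ in range(max_tries):
            draft = generate(prompt, context=docs)   # fluent, not yet trusted
            problems = check_claims(draft, docs)     # validation layer
            if not problems:
                return draft
            prompt = f"{question}\n\nFix these issues: {problems}"
        return None                                  # refuse instead of bluffing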

by whyenot an hour ago

> LLMs aren’t built around truth as a first-class primitive.

neither are humans

> They optimize for next-token probability and human approval, not factual verification.

while there are outliers, most humans also tend to tell people what they want to hear and to fit in.

> factuality is emergent and contingent, not enforced by architecture.

like humans; as far as we know, there is no "factuality" gene, and we lie to ourselves, to others, in politics, scientific papers, to our partners, etc.

> If we’re going to treat them as coworkers or exoskeletons, we should be clear about that distinction.

I don't see the distinction. Humans exhibit many of the same behaviours.

by 13415 an hour ago

Strangely, the GP replaced the ChatGPT-generated text you're commenting on with an even worse and more misleading ChatGPT-generated one. Perhaps in order to make a point.

by kiba an hour ago

A much more useful tool is a technology that checks for our blind spots and bugs.

For example, fact-checking a news article and making sure what gets reported lines up with base reality.

I once fact-checked a virology lecture and found out that the professor had confused two brothers for one individual.

I am sure the professor has a super solid grasp of how viruses work, but errors like these probably creep in all the time.

by emp17344 29 minutes ago

Ethical realists would disagree with you.

by dwheeler an hour ago

I prefer the term "assistant". It can do some tasks, but today's AI often needs human guidance for good results.

by givemeethekeys an hour ago

Closer to a really capable intern. Lots of potential for good and bad; needs to be watched closely.

by badgersnake an hour ago

I’ve been playing with qwen3-coder recently and that intern is definitely not getting hired, despite the rave reviews elsewhere.

by icedchai 28 minutes ago

Have you tried Claude Code with Opus or Sonnet 4.5? I've played around with a ton of open models and they just don't compare in terms of quality.

by hintymad an hour ago

Or software engineers are not coachmen, with AI being to them what the diesel engine was to horses. Instead, software engineers are minstrels -- they disappear if all they do is move knowledge from one place to another.

by cranberryturkey 28 minutes ago

The exoskeleton metaphor is closer than most analogies but it still undersells one thing: exoskeletons augment existing capability along the same axis. AI augments along orthogonal axes too.

Running 17 products as an indie maker, I've found AI is less "do the same thing faster" and more "attempt things you'd never justify the time for." I now write throwaway prototypes to test ideas that would have died as shower thoughts. The bottleneck moved from "can I build this" to "should I build this" — and that's a judgment call AI makes worse, not better.

The real risk of the exoskeleton framing is that it implies AI makes you better at what you already do. In practice it makes you worse at deciding what to do, because the cost of starting is near zero but the cost of maintaining and shipping is unchanged.

by ge96 2 hours ago

It's funny developing AI stuff, e.g. RAG tools, while being against AI at the same time -- not drinking the kool aid, I mean.

But it's fun, I say "Henceforth you shall be known as Jaundice" and it's like "Alright my lord, I am now referred to as Jaundice"

by xnx 2 hours ago

An electric bicycle for the mind.

by oxag3n 14 minutes ago

The owners' intent is more like an electric chair (for SWEs), but some people are trying to use it as an office chair.

by clickety_clack an hour ago

Maybe more of a mobility scooter for the mind.

by xnx an hour ago

Indeed that may be more apt.

I like the ebike analogy because [on many ebikes] you can press the button to go or pedal to amplify your output.

by nancyminusone 2 hours ago

An electric chair for the mind?

by ares623 2 hours ago

I prefer mind vibe-rator.

by mikkupikku an hour ago

Exoskeletons sound cool but somebody please put an LLM into a spider tank.

by functionmouse an hour ago

A blogger who fancies themselves an AI vibe-code guru with 12 arms and a 3rd eye, yet can't make a homepage that's not totally broken.

How typical!

by blibble 2 hours ago

an exoskeleton made of cheese

by lukev an hour ago

Frankly I'm tired of metaphor-based attempts to explain LLMs.

Stochastic Parrots. Interns. Junior Devs. Thought partners. Bicycles for the mind. Spicy autocomplete. A blurry jpeg of the web. Calculators but for words. Copilot. The term "artificial intelligence" itself.

These may correspond to a greater or lesser degree with what LLMs are capable of, but if we stick to metaphors as our primary tool for reasoning about these machines, we're hamstringing ourselves and making it impossible to reason about the frontier of capabilities, or resolve disagreements about them.

An understanding without metaphors isn't easy -- it requires a grasp of math, computer science, linguistics, and philosophy.

But if we're going to move forward instead of just finding slightly more useful tropes, we have to do it. Or at least to try.

by gf263 an hour ago

“The day you teach the child the name of the bird, the child will never see that bird again.”

by sibeliuss an hour ago

This is utterly boring AI writing. Go, please go away...

by filipeisho an hour ago

By reading the title, I already know you did not try OpenClaw. AI employees are here.

by esafak 42 minutes ago

What are your digital 'employees' doing? Did they replace any humans or was there nobody before?

by BeetleB an hour ago

Looking into OpenClaw, I really do want to believe all the hype. However, it's frustrating that I can find very few concrete examples of people showcasing their work with it.

Can you highlight what you've managed to do with it?
