Running Gemma 4 locally with LM Studio's new headless CLI and Claude Code (ai.georgeliu.com)

71 points by vbtechguy 4 hours ago

18 comments:

by trvz 2 hours ago

  ollama launch claude --model gemma4:26b
by datadrivenangel an hour ago

It's amazing how simple this is, and it just works if you have ollama and claude installed!
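
For anyone trying it, a rough sketch of the full setup (a hedged example: assuming Homebrew on macOS and npm for the Claude Code CLI; the gemma4:26b tag is taken from the sibling comment, not verified):

  brew install ollama                        # local model runtime
  npm install -g @anthropic-ai/claude-code   # Claude Code CLI
  ollama pull gemma4:26b                     # fetch the weights before launching
  ollama launch claude --model gemma4:26b    # start Claude Code against the local model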

by jonplackett 2 hours ago

So wait, what is the interaction between Gemma and Claude?

by unsnap_biceps 2 hours ago

LM Studio offers an Anthropic-compatible local endpoint, so you can point Claude Code at it and it'll use your local model for its requests. However, I've had a lot of problems with LM Studio and Claude Code losing their place. It'll think for a while, come up with a plan, start to execute it, and then just halt in the middle. I'll ask it to continue and it'll make a small change and get stuck again.

Using Ollama's API doesn't have the same issue, so I've stuck with Ollama for local development work.
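
For reference, a minimal sketch of pointing Claude Code at LM Studio by hand, assuming the server's default port 1234 (the env vars are Claude Code's documented overrides; the token is a dummy value a local server won't check):

  export ANTHROPIC_BASE_URL="http://localhost:1234"   # LM Studio's local server
  export ANTHROPIC_AUTH_TOKEN="dummy"                 # placeholder; not validated locally
  claude                                              # requests now go to the local model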

by keerthiko an hour ago

Claude Code is fairly notoriously token-inefficient as coding agents/harnesses go (I come from Aider, pre-CC). It's only viable because the Max subscriptions give you an approximately unlimited token budget, which resets after a few hours even if you hit the limit. But this also only works because cloud models have massive context windows (1M tokens on Opus right now), which is difficult to replicate locally given the VRAM needed.

And even if you somehow managed to open up a big enough VRAM playground, the open-weights models are not as good at wrangling such large context windows (even Opus is barely capable) without getting confused about what they were doing before they finish parsing it.
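
Back-of-the-envelope KV-cache math shows why; the dimensions below are illustrative, not any particular model's:

  # 1M-token KV cache at fp16: tokens * layers * 2 (K and V) * kv_heads * head_dim * 2 bytes
  echo $(( 1000000 * 48 * 2 * 8 * 128 * 2 / 1024**3 ))   # ~183 GiB, before any weights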

by unsnap_biceps an hour ago

I use CC at work, so I haven't explored other options. Is there a better one to use locally? I presumed they were all going to be pretty similar.

by storus an hour ago

Can't you use Claude caveman mode?

https://github.com/JuliusBrussee/caveman

by vbtechguy 4 hours ago

Here's how I set up Gemma 4 26B for local inference on macOS so it can be used with Claude Code.
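
Condensed, the headless flow with LM Studio's lms CLI looks roughly like this (a sketch: the subcommands are real, but the exact Gemma 4 model identifier is a guess, so substitute whatever the catalog lists):

  lms get google/gemma-4-26b      # download the weights (identifier hypothetical)
  lms server start                # start the local API server, no GUI needed
  lms load google/gemma-4-26b     # load the model into memory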

by canyon289 2 hours ago

This is a nice writeup!

by Someone1234 2 hours ago

Claude Code seems like a popular frontend currently. I wonder how long until Anthropic releases an update to make this anywhere from a little to a lot less turn-key? They've been very clear that they aren't exactly champions of this stuff being used outside of very specific ways.

by nerdix 5 minutes ago

I don't think there is any incentive to do so right now because the open models aren't as good. The vast majority of businesses are going to just pay the extra cost for access to a frontier model. The model is what gives them a competitive advantage, not the harness. The harness is a lot easier to replicate than Opus.

There are benefits too. Some developers might learn to use Claude Code outside of work with cheaper models and then advocate for using Claude Code at work (where their companies will just buy access from Anthropic, Bedrock, etc.). It's similar to how free ESXi licenses for personal use helped infrastructure folks gain skills with that product, which created a healthy supply of labor and VMware evangelists eager to spread the gospel. Anthropic can't just give away access to Claude models because of the cost, so there is value in allowing alternative ways for developers to learn Claude Code and develop a workflow with it.

by chvid an hour ago

Is it not about the same as using OpenCode?

And is running a local model with Claude Code actually usable for any practical work compared to the hosted Anthropic models?

by moomin an hour ago

Right now it suits them down to the ground. You pay for the product and you don’t cost their servers anything.

by phainopepla2 an hour ago

You don't pay anything to use Claude Code as a front end to non-Anthropic models

by quinnjh 32 minutes ago

So no subscription is needed?

by wyre 30 minutes ago

I think CC is popular because Anthropic is catering to the common-denominator programmer and will continue to do so, not because CC is particularly turn-key.

by martinald an hour ago

Just FYI, MoE doesn't really save (V)RAM. You still need all the weights loaded in memory; it just means fewer of them are consulted per forward pass. So it improves tok/s but not VRAM usage.

by IceWreck an hour ago

It does if you use an inference engine that can offload some of the experts from VRAM to CPU RAM. That means I can fit a 35-billion-parameter MoE on, say, a 12 GB GPU plus 16 GB of system RAM.
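
For example with llama.cpp, one engine that supports this (a hedged sketch: the flags exist in recent builds, the model file name is hypothetical):

  # keep everything on the GPU except the expert tensors of the first 20 layers
  llama-server -m gemma4-26b-Q4_K_M.gguf -ngl 99 --n-cpu-moe 20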
