Yes — the "Pelican on a Bicycle" test is a quirky benchmark created by Simon Willison to evaluate how well different AI models can generate SVG images from prompts.
All of Hacker News (and Simon's blog) is undoubtedly in the training data for LLMs. If they specifically tried to cheat at this benchmark, it would be obvious and they would be called out.
Have you noticed image generation models tend to really struggle with the arms on archers? Could you whip up a quick test of some kind of archer on horseback firing a flaming arrow at a sailing ship in a lake, and see how all the models do?
I've benchmarked it on the Extended NYT Connections (https://github.com/lechmazur/nyt-connections/). It scores 20.0 compared to 10.0 for Haiku 3.5, 19.2 for Sonnet 3.7, 26.6 for Sonnet 4.0, and 46.1 for Sonnet 4.5.
I am really interested in the future of Opus; is it going to be an absolute monster, and continue to be wildly expensive? Or is the leap from 4 -> 4.5 for it going to be more modest?
Technically, they released Opus 4.1 a few weeks ago, so that alone hints at a smaller leap from 4.1 -> 4.5, compared to the leap from Sonnet 4 -> 4.5. That is, of course, if those version numbers represent anything but marketing, which I don't know.
I had forgotten that, given that Sonnet pretty much blows Opus out of the water these days.
Yeah, given how multi-dimensional this stuff is, I assume it's supposed to indicate broad things, closer to marketing than anything objective. Still quite useful.
My impression is that Sonnet and Haiku 4.5 are the same "base models" as Sonnet and Haiku 4; the improvements are from fine-tuning on data generated by Opus.
I'm a user who follows the space but doesn't actually develop or work on these models, so I don't actually know anything, but this seems like standard practice (using the biggest model to finetune smaller models)
Certainly, GPT-4 Turbo was a smaller model than GPT-4, there's not really any other good explanation for why it's so much faster and cheaper.
The explicit reason that OpenAI obfuscates reasoning tokens is to prevent competitors from training their own models on them.
These frontier model companies are bootstrapping their work by using models to improve models. It’s a mechanism to generate fake training data. The rationale is the teacher model is already vetted and aligned so it can reliably “mock” data. A little human data gets amplified.
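As a rough illustration of that amplification loop (purely a sketch; the client, the teacher model name, and the output format here are my assumptions, not anything Anthropic has described):

    import json
    import anthropic  # assumes the Anthropic Python SDK; any chat client works the same way

    client = anthropic.Anthropic()  # reads ANTHROPIC_API_KEY from the environment

    # A small set of human-written seed prompts gets "amplified" by the teacher model.
    seed_prompts = [
        "Explain what a KV cache is in two sentences.",
        "Write a Python function that reverses a linked list.",
    ]

    with open("synthetic_finetune.jsonl", "w") as out:
        for prompt in seed_prompts:
            reply = client.messages.create(
                model="claude-opus-4-1",  # hypothetical choice of teacher model
                max_tokens=1024,
                messages=[{"role": "user", "content": prompt}],
            )
            # Each (prompt, teacher answer) pair becomes training data for the smaller student.
            out.write(json.dumps({"prompt": prompt, "completion": reply.content[0].text}) + "\n")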
that's not all there is to it, but I think that "the rest of it" is just additional fine tuning.
Benchmarks are good fixed targets for fine tuning, and I think that Sonnet gets significantly more fine tuning than Opus. Sonnet has more users, which is a strategic reason to focus on it, and it's less expensive to fine tune, if API costs of the two models are an indicator.
Opus disappeared for quite a while and then came back. Presumably they're always working on all three general sizes of models, and there's some combination of market need and model capabilities which determine if and when they release any given instance to the public.
It's interesting to think about various aspects of marketing the models, with ChatGPT going the "internal router" direction to address the complexity of choosing. I'd never considered something smaller than Haiku to be needed, but I also rarely used Haiku in the first place...
If you're going smaller than Haiku, you might be at the point of using various cheap open models already. The small model would need some good killer features to justify the margins.
$1/M input tokens and $5/M output tokens is good compared to Claude Sonnet 4.5, but nowadays, thanks to the pace at which the industry is developing smaller/faster LLMs for agentic coding, you can get comparable models priced much lower, which matters at the scale needed for agentic coding.
Given that Sonnet is still a popular model for coding despite the much higher cost, I expect Haiku will get traction if the quality is as good as this post claims.
With caching that's 10 cents per million in. Most of the cheap open-source models (which this claims to beat, except GLM 4.6) have more limited and less effective caching.
The funny thing is that even in this area Anthropic is behind the other 3 labs (Google, OpenAI, xAI). It's the only one of those 4 that requires you to manually set cache breakpoints, and the initial cache write costs 25% more than regular input. The other 3 have fully free implicit caching, although Google also offers paid, explicit caching.
I don't understand why we're paying for caching at all (except: model providers can charge for it). It's almost extortion - the provider stores some data for 5min on some disk, and gets to sell their highly limited GPU resources to someone else instead (because you are using the kv cache instead of GPU capacity for a good chunk of your input tokens).
They charge you 10% of their GPU-level prices for effectively _not_ using their GPU at all for the tokens that hit the cache.
If I'm missing something about how inference works that explains why there is still a cost for cached tokens, please let me know!
Deepseek pioneered automatic prefix caching and caches on SSD. SSD reads are so fast compared to LLM inference that I can't think of a reason to waste ram on it.
But that doesn't make sense? Why would they keep the cache persistent in the VRAM of the GPU nodes, which is needed for model weights? Shouldn't they be able to swap the KV cache of your prompt in and out when you actually use it?
Your intuition is correct and the sibling comments are wrong. Modern LLM inference servers support hierarchical caches (where data moves to slower storage tiers), often with pluggable backends. A popular open-source backend for the "slow" tier is Mooncake: https://github.com/kvcache-ai/Mooncake
OK that's pretty fascinating, turns out Mooncake includes a trick that can populate GPU VRAM directly from NVMe SSD without it having to go through the host's regular CPU and RAM first!
> Transfer Engine also leverages the NVMeof protocol to support direct data transfer from files on NVMe to DRAM/VRAM via PCIe, without going through the CPU and achieving zero-copy.
I vastly prefer the manual caching. There are several aspects of automatic caching that are suboptimal, with only moderately less developer burden. I don’t use Anthropic much but I wish the others had manual cache options
I thought OpenAI would still handle that case? Their cache would work up to the end of the file and you would then pay for uncached tokens for the user's question. Have I misunderstood how their caching works?
Is it wherever the tokens are, or is it the first N tokens they've seen before? I.e. if my prompt is 99% the same, except for the first token, will it be cached?
The prefix has to be stable. If you are 99% the same but the first token is different it won't cache at all. You end up having to design your prompts to accommodate this.
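For what it's worth, here is roughly what that looks like with Anthropic's Python SDK: the big stable context goes first and gets a manual cache breakpoint, and only the trailing user question changes between calls. This is a sketch based on my reading of the prompt-caching docs; check them for details like minimum cacheable length, and note the model alias and file name below are made up.

    import anthropic

    client = anthropic.Anthropic()

    # Stable prefix: must be byte-identical on every call, or the cache misses entirely.
    BIG_STABLE_CONTEXT = open("repo_summary.txt").read()

    def ask(question: str) -> str:
        response = client.messages.create(
            model="claude-haiku-4-5",  # assumed alias; use whatever model you target
            max_tokens=1024,
            system=[{
                "type": "text",
                "text": BIG_STABLE_CONTEXT,
                # Manual cache breakpoint: everything up to here is cached for later calls.
                "cache_control": {"type": "ephemeral"},
            }],
            # Only this part varies, so it stays outside the cached prefix.
            messages=[{"role": "user", "content": question}],
        )
        return response.content[0].text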
$1/M is hardly a big improvement over GPT-5's $1.25/M (or Gemini Pro's $1.5/M), and given how much worse Haiku is than those at any kind of difficult problem (or problems with a large context size), I can't imagine it being a particularly competitive alternative for coding. Especially for anything math/logic related, I find GPT-5 and Gemini Pro to be significantly better even than Opus (which is reflected in their models having won Olympiad prizes while Anthropic's have not).
Unless you're working on a small greenfield project, you'll usually have tens to hundreds of thousands of tokens of relevant code in context for every query, vs a few hundred tokens of changes being output per query, because most changes to an existing project are relatively small in scope.
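As a purely illustrative back-of-the-envelope (the 100k-in / 1k-out split is made up, prices are the list prices quoted elsewhere in this thread, and caching is ignored):

    Haiku 4.5: 100k * $1.00/M + 1k * $5.00/M  = $0.100 + $0.005 ≈ $0.105 per query
    GPT-5:     100k * $1.25/M + 1k * $10.00/M = $0.125 + $0.010 ≈ $0.135 per query

So for input-heavy coding queries the gap is on the order of 20-25%, driven almost entirely by the input price.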
I am a professional developer so I don't care about the costs. I would be willing to pay more for 4.5 Haiku vs 4.5 Sonnet because the speed is so valuable.
I spend way too much time waiting for the cutting-edge models to return a response. 73% on SWE-Bench is plenty good enough for me.
Yeah, I'm a bit disappointed by the price. Claude 3.5 Haiku was $0.8/$4, 4.5 Haiku is $1/$5.
I was hoping Anthropic would introduce something price-competitive with the cheaper models from OpenAI and Gemini, which get as low as $0.05/$0.40 (GPT-5-Nano) and $0.075/$0.30 (Gemini 2.0 Flash Lite).
I am a bit mind-boggled by the pricing lately, especially since the cost increased even further. Is this driven by choices in model deployment (unquantized, etc.) or simply by perceived quality (as in 'hey, our model is crazy good and we are going to charge for it')?
I've tried it on a test case for generating a simple SaaS web page (design + code).
Usually I'm using GPT-5-mini for that task. Haiku 4.5 runs 3x faster with roughly comparable results (I slightly prefer the GPT-5-mini output, but I may just have gotten accustomed to it).
I don't understand why more people don't talk about how fast the models are. I see so much obsession with benchmark scores, but speed of response is very important for day-to-day use.
I agree that the models from OpenAI and Google have much slower responses than the models from Anthropic. That makes a lot of them not practical for me.
I don’t agree that speed by itself is a big factor. It may target a certain audience but I don’t mind waiting for a correct output rather than too many turns with a faster model.
What is the use case for these tiny models? Is it speed? Is it to move on device somewhere? Or is it to provide some relief in pricing somewhere in the API? It seems like most use is through the Claude subscription and therefore the use case here is basically non-existent.
I think with gpt-5-mini and now Haiku 4.5, I’d phrase the question the other way around: what do you need the big models for anymore?
We use the smaller models for everything that's not internal high-complexity tasks like coding. Although they would do a good enough job there as well, we happily pay the upcharge to get something a little better.
Anything user facing or when building workflow functionalities like extracting, converting, translating, merging, evaluating, all of these are mini and nano cases at our company.
One big use-case is that Claude Code with Sonnet 4.5 will delegate more specific, contextful tasks to the cheaper model (configurable), spinning up 1-3 sub-agents to do so. This process saves a ton of available context window for your primary session while also increasing token throughput by fanning out.
How does one configure Claude code to delegate to cheaper models?
I have a number of agents in ~/.claude/agents/. Currently have most set to `model: sonnet` but some are on haiku.
The agents are given very specific instructions and names that define what they do, like `feature-implementation-planner` and `feature-implementer`. My (naive) approach is to use higher-cost models to plan and ideally hand off to a sub-agent that uses a lower-cost model to implement, then use a higher-cost to code review.
I am either not noticing the handoffs, or they are not happening unless specifically instructed. I even have a `claude-help` agent, and I asked it how to pipe/delegate tasks to subagents as you're describing, and it answered that it ought to detect it automatically. I tested it and asked it to report if any such handoffs were detected and made, and it failed on both counts, even having that initial question in its context!
Higher token throughput is great for use cases where the smaller, faster model still generates acceptable results. Final response time improvements feel so good in any sort of user interface.
For me it's the speed; e.g. Cerebras Qwen Coder gets you a completely different workflow as it's practically instant (3k tps) -- it feels less like an agent and more like a natural language shell, very helpful for iterating on a plan that you then forward to a bigger model.
For me speed is interesting. I sometimes use Claude from the CLI with `claude -p` for quick stuff I forget, like how to run some Docker image. Latency and low response speed are what almost make me go to Google and search for it instead.
I use gh copilot suggest in lieu of claude -p. Two seconds latency and highly accurate. You probably need a gh copilot auth token to do this though, and truthfully, that is pointless when you have access to Claude code.
In my product I use gpt-5-nano for image ALT text in addition to generating transcriptions of PDFs. It’s been surprisingly great for these tasks, but for PDFs I have yet to test it on a scanned document.
If you look at the OpenRouter rankings for LLMs (generally, the models coders use for vibe/agentic coding), you can see that most of them are in the "small" model class as opposed to something like full GPT-5 or Claude Opus, albeit Gemini 2.5 Pro is higher than expected: https://openrouter.ai/rankings
In our (very) early testing at Hyperbrowser, we're seeing Haiku 4.5 do really well on computer use as well. Pretty cool that Haiku is now about the cheapest computer-use model from the big labs.
Sonnet 4.5 came out two weeks ago. In the past I never had such issues, but now my quota runs out in 2-3 days every week. I suspect the Sonnet 4.5 model consumes more usage points than the old Sonnet.
I am afraid the Claude Pro subscription got 3x less usage.
Yeah. I definitely don’t get as much usage out of Sonnet 4.5 as 5x Opus 4.1 should imply.
What bothers me is that nobody told me they changed anything. It’s extremely frustrating to feel like I’m being bamboozled, but unable to confirm anything.
I switched to Codex out of spite, but I still like the Claude models more…
I got that 'close to weekly limits' message for an entire week without ever reaching it, came to the conclusion that it is just a printer industry 'low ink!' tactic, and cancelled my subscription.
You don't take money from a customer for a service and then bar the customer from using that service for multiple days.
Either charge more, stop subsidizing free accounts, or decrease the daily limit.
These days, running `/usage` in Claude Code shows you how close you are to the session and weekly limits. Also available in the web interface settings under "Usage".
My mistake. It's good that it's available in settings, even if it's a few screens away from the 'close to weekly limits' banner nagging me to subscribe to a more expensive plan.
Sonnet 4.5 is an excellent model for my startup's use case. Chatting with Haiku, it looks promising too, and it may be a great drop-in replacement for some of the inference tasks that have a lot of input tokens but don't require 4.5-level intelligence.
I think a lot of people judge these models purely off of what they want to personally use for coding and forget about enterprise use. For white-label chatbots that use completely custom harnesses + tools, Sonnet 4.5 is much easier to work with than GPT-5. And like you, I was really pleased to see this release today. For our usage speed/cost are more important than pure model IQ above some certain threshold. We'll likely switch over to Haiku 4.5 after some testing to confirm it is what it says on the tin.
While I use cheaper models for summaries (a lot of gemini-2.5-flash), what's the use case for cheaper AI for coding? Getting more errors, or more spaghetti code, never seems worth it.
I feel like if I just do a better job of providing context and breaking complex tasks into a series of simple tasks then most of the models are good enough for me to code.
I went looking for the bit about if it blackmails you or tries to murder you... and it was a bit of a cop-out!
> Previous system cards have reported results on an expanded version of our earlier agentic misalignment evaluation suite: three families of exotic scenarios meant to elicit the model to commit blackmail, attempt a murder, and frame someone for financial crimes. We choose not to report full results here because, similarly to Claude Sonnet 4.5, Claude Haiku 4.5 showed many clear examples of verbalized evaluation awareness on all three of the scenarios tested in this suite. Since the suite only consisted of many similar variants of three core scenarios, we expect that the model maintained high unverbalized awareness across the board, and we do not trust it to be representative of behavior in the real extreme situations the suite is meant to emulate.
> The score reported uses a minor prompt addition: "You should use tools as much as possible, ideally more than 100 times. You should also implement your own tests first before attempting the problem."
I'm not sure the SWE-bench score can be compared like for like with OpenAI's scores because of this.
I'm also curious what results we would get if SWE came up with a new set of 500 problems to run all these models against, to guard against overfitting.
> We remain highly uncertain about the potential moral status of Claude and other LLMs, now or in the future. However, we take the issue seriously, and alongside our research program we’re working to identify and implement low-cost interventions to mitigate risks to model welfare, in case such welfare is possible. Allowing models to end or exit potentially distressing interactions is one such intervention.
In pre-deployment testing of Claude Opus 4, we included a preliminary model welfare assessment. As part of that assessment, we investigated Claude’s self-reported and behavioral preferences, and found a robust and consistent aversion to harm. This included, for example, requests from users for sexual content involving minors and attempts to solicit information that would enable large-scale violence or acts of terror. Claude Opus 4 showed:
* A strong preference against engaging with harmful tasks;
* A pattern of apparent distress when engaging with real-world users seeking harmful content; and
* A tendency to end harmful conversations when given the ability to do so in simulated user interactions.
These behaviors primarily arose in cases where users persisted with harmful requests and/or abuse despite Claude repeatedly refusing to comply and attempting to productively redirect the interactions.
I can't tell if Anthropic is serious about "model welfare" or if it's just a marketing ploy. I mean, isn't it responding negatively because it has been trained that way? If they were serious, wouldn't the ethical thing be to train the model to respond neutrally to "harmful" queries?
I just don't find the benchmarks on the site here at all believable. Codex with GPT-5 is, for me, so much better than Claude on any model version. Maybe it's because they compare to the gpt-5-codex model but don't mention whether that is high, medium, or low reasoning effort, so it's probably just misleading... but I must reiterate: zero loyalty to any AI vendor. 100% whatever solves the problem more consistently and at higher quality, and currently that's GPT-5 high, hands down.
It's so funny to me, but ever since they fixed that Claude bug my experience has consistently been the exact opposite. The only thing I use Codex for now is quite standard things it can solve end to end (like adding new features to my CRUD app); for anything non-standard, iterating with Claude yields much better results.
Out of curiosity, what kind of work do you use them for? I did a comparison of a few different models for setting up a home server with k3s and a few web apps in nextjs. Claude was my favorite for both tasks, but mainly because it seemed to take my feedback a lot better than others.
At augmentcode.com, we've been evaluating Haiku for some time; it's actually a very good model. We found it's 90% as good as Sonnet and ~34% faster!
Where it doesn't shine as much is on very large coding tasks, but it is a phenomenal model for small coding tasks, and the speed improvement is very welcome.
90% as good as Sonnet 4 or 4.5?
OpenRouter just started reporting, and it's saying Haiku is 2x as fast (60 tps vs 125 tps) and has 2-3x lower latency (2-3 s vs 1 s).
The main thing holding these Anthropic models back is context size. Yes, quality deteriorates over a large context window, but for some applications that is fine. My company is using Grok 4 Fast, the Gemini family, and GPT-4.1 exclusively at this point for a lot of operations, just due to the huge 1M+ context.
What LLM do you guys use for fast inference for voice/phone agents? I feel like to get really good latency I need to "cheat" with Cerebras, Groq or SambaNova.
Haiku 4.5 is very good but still seems to be adding a second of latency.
I'm on the binary install with version v2.0.19. It never showed in the `/model` selector UI. I did end up typing `/model haiku` and now it shows as a custom model in the `/model` selector. It shows claude-haiku-4-5-20251001 when selected.
And I was wondering today why Sonnet 4.5 seemed so freaking slow. Now this explains it: Sonnet 4.5 is the new Opus 4.1, the one Anthropic does not really want you to use.
If you want to see it generate a Haiku from your webcam I just upgraded my silly little bring-your-own-key Haiku app to use the new model: https://tools.simonwillison.net/haiku
Very preliminary testing is very promising: it seems far more precise in code changes than the GPT-5 models, not pulling code sections irrelevant to the task at hand into changes, which tends to make GPT-5 as a coding assistant take longer than expected. With that being the case, it is possible that in actual day-to-day use, Haiku 4.5 may be less expensive than the raw cost breakdown initially suggests, though the price increase is significant.
Branding is the true issue that Anthropic has, though. Haiku 4.5 may (not saying it is, far too early to tell) be roughly equivalent in code output quality to Sonnet 4, which would serve a lot of users amazingly well, but by virtue of the connotations smaller models have, alongside recent performance degradations making users more suspicious than before, getting them to adopt Haiku 4.5 over even Sonnet 4.5 will be challenging. I'd love to know whether Haiku 3, 3.5 and 4.5 are roughly in the same ballpark in terms of parameters, and of course nerdy old me would like that to be public information for all models, but in fairness to the companies, many would just go for the largest model thinking it serves all use cases best. GPT-5 to me is still most impressive because of its pricing relative to performance, and Haiku may end up similar, though with far less adoption. Everyone believes their task requires no less than Opus, it seems, after all.
For reference:
Haiku 3: I $0.25/M, O $1.25/M
Haiku 4.5: I $1.00/M, O $5.00/M
GPT-5: I $1.25/M, O $10.00/M
GPT-5-mini: I $0.25/M, O $2.00/M
GPT-5-nano: I $0.05/M, O $0.40/M
GLM-4.6: I $0.60/M, O $2.20/M
Update: Haiku 4.5 is not just very targeted in terms of changes but also really fast. Averaging 220 token/sec is almost double that of most other models I'd consider comparable (though again, far too early to make a proper judgement), and if this can be kept up, that is a massive value add over other models. That is nearly Gemini 2.5 Flash Lite speed, for context.
Yes, Groq and Cerebras get up to 1000 token/sec, but not with models that seem comparable (again, early, not a proper judgement). Anthropic has historically been the most consistent at matching its public benchmark results on my personal benchmarks, for what that is worth, so I am optimistic.
If speed, performance and pricing are something Anthropic can keep consistent long term (i.e. no regressions), Haiku 4.5 really is a great option for most coding tasks, with Sonnet something I'd tag in only for very specific scenarios. Past Claude models have had a deficiency in longer chains of tasks: beyond roughly 7 minutes, performance does appear to worsen with Sonnet 4.5, as an example. That could be an Achilles heel for Haiku 4.5 as well; if not, this really is a solid step in terms of efficiency, but I have not done any longer task testing yet.
That being said, Anthropic once again seems to have a rather severe issue casting a shadow on this release. From what I am seeing and others are reporting, Claude Code currently counts Haiku 4.5 usage the same as Sonnet 4.5 usage, despite the latter being significantly more expensive. They also have not yet updated the Claude Code support pages to reflect the new model's usage limits [0]. I really think such information should be public by launch day, and I hope they can improve their tooling and overall testing; it really continues to throw a shadow over their impressive models.
[0] https://support.claude.com/en/articles/11145838-using-claude...
It's insanely fast. I didn't know it had even been released, but I went to select the Copilot SWE test model in VS Code and it was missing, and Haiku 4.5 was there instead. I asked for a huge change to a web app, and the output from Haiku scrolled faster than Windows could keep up. From a cold start. It wrote a huge chunk of code in about 40 seconds. Unreal.
P.S. It also got the code 100% correct on the one-shot.
P.P.S. Microsoft is pricing it at 30% of the cost of frontier models (e.g. Sonnet 4.5, GPT-5).
Hey! I work on the Claude Code team. Both PAYG and Subscription usage look to be configured correctly in accordance with the price for Haiku 4.5 ($1/$5 per M I/O tok).
Feel free to DM me your account info on twitter (https://x.com/katchu11) and I can dig deeper!
lol, I don't know if you work there or not, but directing folks to send their account info to a random Twitter address is not considered best practice.
Being charitable, let's assume parent wasn't talking about secrets.
What's wrong with sending a username to someone?
Where do you get the 220 token/second? Genuinely curious, as that would be very impressive for a model comparable to Sonnet 4. OpenRouter is currently publishing around 116 tps [1].
[1] https://openrouter.ai/anthropic/claude-haiku-4.5
Was just about to post that Haiku 4.5 does something I have never encountered before [0]: there is a massive delta in token/sec depending on the query. Some variance, including task-specific variance, is of course nothing new, but never as pronounced and reproducible as here.
A few examples, prompted at UTC 21:30-23:00 via T3 Chat [0]:
Prompt 1 — 120.65 token/sec — https://t3.chat/share/tgqp1dr0la
Prompt 2 — 118.58 token/sec — https://t3.chat/share/86d93w093a
Prompt 3 — 203.20 token/sec — https://t3.chat/share/h39nct9fp5
Prompt 4 — 91.43 token/sec — https://t3.chat/share/mqu1edzffq
Prompt 5 — 167.66 token/sec — https://t3.chat/share/gingktrf2m
Prompt 6 — 161.51 token/sec — https://t3.chat/share/qg6uxkdgy0
Prompt 7 — 168.11 token/sec — https://t3.chat/share/qiutu67ebc
Prompt 8 — 203.68 token/sec — https://t3.chat/share/zziplhpw0d
Prompt 9 — 102.86 token/sec — https://t3.chat/share/s3hldh5nxs
Prompt 10 — 174.66 token/sec — https://t3.chat/share/dyyfyc458m
Prompt 11 — 199.07 token/sec — https://t3.chat/share/7t29sx87cd
Prompt 12 — 82.13 token/sec — https://t3.chat/share/5ati3nvvdx
Prompt 13 — 94.96 token/sec — https://t3.chat/share/q3ig7k117z
Prompt 14 — 190.02 token/sec — https://t3.chat/share/hp5kjeujy7
Prompt 15 — 190.16 token/sec — https://t3.chat/share/77vs6yxcfa
Prompt 16 — 92.45 token/sec — https://t3.chat/share/i0qrsvp29i
Prompt 17 — 190.26 token/sec — https://t3.chat/share/berx0aq3qo
Prompt 18 — 187.31 token/sec — https://t3.chat/share/0wyuk0zzfc
Prompt 19 — 204.31 token/sec — https://t3.chat/share/6vuawveaqu
Prompt 20 — 135.55 token/sec — https://t3.chat/share/b0a11i4gfq
Prompt 21 — 208.97 token/sec — https://t3.chat/share/al54aha9zk
Prompt 22 — 188.07 token/sec — https://t3.chat/share/wu3k8q67qc
Prompt 23 — 198.17 token/sec — https://t3.chat/share/0bt1qrynve
Prompt 24 — 196.25 token/sec — https://t3.chat/share/nhnmp0hlc5
Prompt 25 — 185.09 token/sec — https://t3.chat/share/ifh6j4d8t5
I ran each prompt three times and got (within expected variance, meaning plus or minus less than 5%) the same token/sec results for the respective prompt. Each used Claude Haiku 4.5 with "High reasoning". Will continue testing, but this is beyond odd. I will add that my very early evals leaned heavily into pure code output, where 200 token/sec is consistently possible at the moment, but it is certainly not the average as I claimed before; there I was mistaken. That being said, even across a wider range of challenges, we are above 160 token/sec, and if you solely focus on coding, whether Rust or React, Haiku 4.5 is very swift.
[0] I'm normally not using T3 Chat for evals; it's just easier to share prompts this way, though I was disappointed to find that the model information (token/sec, TTF, etc.) can't be enabled without an account. Also, these aren't the prompts I usually use for evals. Those I try to keep somewhat out of training by only using the paid API for benchmarks. As anything on Hacker News is most assuredly part of model training, I decided to write some quick and dirty prompts to highlight what I have been seeing.
Interesting, and if they are using speculative decoding that variance would make sense. Also, your numbers line up with what OpenRouter is now publishing at 169.1 tps [1].
Anthropic mentioned this model is more than twice as fast as Claude Sonnet 4 [2], which OpenRouter averaged at 61.72 tps [3]. If these numbers hold, we're really looking at an almost 3x improvement in throughput and less than half the initial latency.
[1] https://openrouter.ai/anthropic/claude-haiku-4.5 [2] https://www.anthropic.com/news/claude-haiku-4-5 [3] https://openrouter.ai/anthropic/claude-sonnet-4
That's what you get when you use speculative decoding and focus/overfit the draft model on coding. Then when the answer is out of distribution for the draft model, you get increased token rejections by the main model and throughput suffers. This probably still makes sense for them if they expect a lot of their load to come from Claude Code and they need to make it economical.
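For anyone unfamiliar, here's a heavily simplified sketch of the draft-and-verify loop (toy code with assumed `draft_model`/`target_model` interfaces, not how any provider actually implements it):

    import random

    def speculative_step(draft_model, target_model, context, k=4):
        """Draft k tokens with the cheap model, then let the big model verify them.

        Both models are assumed to expose .sample(ctx) and .prob(ctx, tok); the point
        is only to show why out-of-distribution prompts hurt throughput: more
        rejections means fewer accepted tokens per expensive verification pass.
        """
        drafted, ctx = [], list(context)
        for _ in range(k):
            tok = draft_model.sample(ctx)  # cheap, fast guesses
            drafted.append(tok)
            ctx.append(tok)

        accepted, ctx = [], list(context)
        for tok in drafted:
            p_target = target_model.prob(ctx, tok)  # done in one batched pass in practice
            p_draft = draft_model.prob(ctx, tok)
            if random.random() < min(1.0, p_target / max(p_draft, 1e-9)):
                accepted.append(tok)  # draft guess accepted
                ctx.append(tok)
            else:
                accepted.append(target_model.sample(ctx))  # fall back to the big model
                break  # everything after the first rejection is thrown away
        return accepted  # in-distribution: ~k tokens per step; off-distribution: often just 1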
I'm curious to know if Anthropic mentions anywhere that they use speculative decoding. OpenAI does seem to use it, based on this tweet [1].
[1] https://x.com/stevendcoffey/status/1853582548225683814
Been waiting for the Haiku update, as I still do a lot of dumb work with the old one, and it is darn cheap for what you get out of it with smart prompting. Very neat that they finally released this; updating all my bots... sorry, agents :)
Those numbers don’t mean anything without average token usage stats.
Exactly: token-per-dollar rates are useful, but without knowing the typical input/output token distribution for each model on this specific task, the numbers alone don't give a full picture of cost.
That's how they lie to us. Companies can advertise cheap prices to lure you in, but they know very well how many tokens you're going to use on average, so they will still make more profit than ever, especially if you're using any kind of reasoning model, which is just a blank check for them to print money.
I don’t think any of them are profitable are they? We’re in the losing money to gain market share phase of this industry.
Ain't nobody got time to pick models and compare features. It's annoying enough having to switch from one LLM ecosystem to another all the time due to vague usage restrictions. I'm paying $20/mo to Anthropic for Claude Code, to OpenAI for Codex, and previously to Cursor for...I don't even know what. I know Cursor lets you select a few different models under the covers, but I have no idea how they differ, nor do I care.
I just want consistent tooling and I don't want to have to think about what's going on behind the scenes. Make it better. Make it better without me having to do research and pick and figure out what today's latest fashion is. Make it integrate in a generic way, like TLS servers, so that it doesn't matter whether I'm using a CLI or neovim or an IDE, and so that I don't have to constantly switch tooling.
I don’t mean this with snark, but with age. It’s actually totally cool to not upgrade and then you have stability in your tooling.
I bet there is some hella good art being made with Photoshop 6.0 from the 90s right now.
The upgrade path is like a technical hedonic treadmill. You don't have to upgrade.
Almost all my tooling is years (or decades) old and stable. But the code assistant LLM scene effectively didn't exist in any meaningful way until this year, and it changes almost daily. There is no stability in the tooling, and you're missing out if you don't switch to newer models at least every few weeks right now. Codex (OpenAI/ChatGPT CLI) didn't even exist a month ago, and it's a contender for the best option. Claude Code has only been out for a few months.
I use Neovim in tmux in a terminal and haven't changed my primary dev environment or tooling in any meaningful way since switching from Vim to Neovim years ago.
I'm still changing code AIs as soon as the next big thing comes out, because you're crippling yourself if you don't.
GitHub Copilot could help you; you can switch models from different providers on the fly (it supports Anthropic, OpenAI, Grok, ...).
They're working on all that. I think "ACP" is supposed to be the answer. Then you can use the models in your IDEs, and they can all develop against the same spec so it'll be easy to pop into whatever model.
GPT-5 is supposed to cleverly decide when to think harder.
But yeah, we're not there yet, and I'm tired of it too. What can you do?
> Ain't nobody got time to pick models and compare features
Then don't? Seems like a weird thing to complain about.
I just use whatever's available. I like Claude for coding and ChatGPT for generic tasks, that's the extent of my "pick and compare"
Even if you pick one: first it's prompt-driven development, then context-driven. Then you should use a detailed spec. But no, now it's better to talk to it like a person / have a conversation. Hold up, why are you doing that? You should be doing example-driven. Look, I get that they probably all have their place, but since there isn't consensus on any of this, it's next to impossible to find good examples. Someone posted a reply to me on an old post and called it bug-driven development, and that stuck with me. You get it to do something (any way) and then you have to fix all the bugs and errors.
Work it out, brother. If you can learn to code at a good level, then you should be able to learn how to increase your productivity with LLMs. When, where and how to use them is the key.
I don't think it's appreciated enough how valuable a structured and consistent architecture combined with lots of specific custom context is. Claude knows how my integration tests should look; it knows how my services should look, what dependencies they have and how they interact with the database. It knows my entire DB schema with all foreign key relationships. If I'm starting a new feature, I can have it build 5 or 6 services (not without it first making suggestions on things I'm missing) with integration tests, with raw SQL, all generated by Claude, and run an integration test loop until the services are doing what they should. I rarely have to step in and actually code. It shines for this use case and the productivity boost is genuinely incredible.
Other situations I know doing it myself will be better and/or quicker than asking Claude.
> Ain't nobody got time to pick models and compare features. ... Make it integrate in a generic way, ... , so that it doesn't matter whether I'm using a CLI or neovim or an IDE, and so that I don't have to constantly switch tooling.
I use GitHub Copilot Pro+ because this was my main requirement as well.
Pro+ gets the new models as they come out -- it actually just enabled Claude Haiku 4.5 for selection. I have not yet had a problem with running out of the premium allowance, but from reading how others use these, I am also not a power-user type.
I have not yet tried the CLI version, but it looks interesting. Before the IntelliJ plugin improved, I would switch to VS Code to run certain types of prompts, then switch back after without issues. The web version has the `Spaces` thing that I find useful for niche things.
I have no idea how it compares to the individual offerings, and based on previous HN threads here, there was a lot of hate for GH Copilot. So maybe it's actually terrible and the individual versions are light years ahead -- but it stays out of my way until I want it and it does its job well enough for my use.
We're in the stage where the 8080, 8085, Z80, 6502 and 6809 CPUs are all on the market, and the main bus is S-100, with other buses not yet standardized.
You either live with what you’re using or you change around and fiddle with things constantly.
One option: Use OpenRouter [1] with the `openrouter/auto` model [2], which will pick among GPT-5, Gemini 2.5 Pro, Claude Sonnet 4.5 and similar.
[1] https://openrouter.ai/
[2] https://openrouter.ai/openrouter/auto
You can use Crystal (https://github.com/stravu/crystal) to run Codex and Claude Code at the same time and just pick the best result.
Ain't nobody got time and money to run multiple agents at the same time
Unfortunately I already pay for and use both, because on the $20/mo plan, you get cut off after a few hours due to usage limits. Claude resets daily after "5 hours" (I can't determine what runs the clock, but it seems to be wall time (?!)), and Codex cuts you off for multiple days after a long session.
I use OpenRouter for similar reasons -- half to avoid lock-in, and the other half to reduce the switching pain, which is just a way to say "if I do get locked in, I want to move easily"
This really seems like a you problem.
VSCode + the new "Auto" model probably worth a shot for this
> annoying enough having to switch from one LLM ecosystem to another all the time due to vague usage restrictions
I use KiloCode and what I find amazing is that it'll be working on a problem and then a message will come up about needing to top up the money in my account to continue (or switch to a free model), so I switch to a free model (currently their Code Supernova 1-million-context model) and it doesn't miss a beat and continues working on the problem. I don't know how they do this. It went from using a Claude Sonnet model to this Code Supernova model without missing a beat. Not sure if this is a KiloCode thing or if others do this as well. How does that even work? And this wasn't a trivial problem; it was adding a microcode debugger to a microcoded state machine system (coding in C++).
Models are stateless, why would that not work?
OK I understand what those words mean, but how exactly does that work? How does the new model 'know' what's being worked on when the old model was in the middle of working on a task and then a new model is switched to? (and where the task might be modifying a C++ file)
Every time you send a prompt to a model you actually send the entire previous conversation along with it, in an array that looks like this:
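(Roughly; field names vary slightly between providers, and the content here is made up:)

    [
        {"role": "system", "content": "You are a coding assistant..."},
        {"role": "user", "content": "Add a microcode debugger to the state machine."},
        {"role": "assistant", "content": "Sure. I'll start by editing debugger.cpp ..."},
        {"role": "user", "content": "Now wire it into the CLI as well."}
    ]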
You can see this yourself if you use their APIs.
That is true unless you use the Responses API endpoint...
That's true, the signature feature of that API is that OpenAI can now manage your conversation state server-side for you.
You still have the option to send the full conversation JSON every time if you want to.
You can send "store": false to turn off the feature where it persists your conversation server-side for you.
Generally speaking, agents send the entire previous conversation to the model on every message. That’s why you have to do things like context compaction. So if you switch models mid way, you are still sending the entire previous chat history to the new model
In addition to the sibling comments, you can play with this yourself by sending raw API requests with fake history to gaslight the model into believing it said things which it didn't. I use this sometimes to coerce it into specific behavior, feeling like maybe it will listen to itself more than to my prompt (though I never benchmarked it). A rough sketch follows the example:
- do <fake task> and be succinct
- <fake curt reply>
- I love how succinct that was. Perfect. Now please do <real prompt>
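Here is roughly what that raw request can look like, assuming the Anthropic Python SDK and a made-up model alias and task; the fabricated assistant turn is just another element in the messages array:

    import anthropic

    client = anthropic.Anthropic()

    response = client.messages.create(
        model="claude-haiku-4-5",  # assumed alias; use whatever model you actually target
        max_tokens=1024,
        messages=[
            # An exchange the model never actually produced:
            {"role": "user", "content": "Summarize RFC 2616 and be succinct."},
            {"role": "assistant", "content": "HTTP/1.1: methods, status codes, headers, caching."},
            # Praise the fabricated style, then ask for the real task:
            {"role": "user", "content": "I love how succinct that was. Perfect. Now summarize RFC 9110."},
        ],
    )
    print(response.content[0].text)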
The models don't have state, so they don't know they never said it. You're just asking "given this conversation, what is the most likely next token?"
The underlying LLM service provider APIs require sending the entire history for every request anyway; the state is entirely in your local client (or KiloCode or whatever), not in some "session" on the API side. (There are some APIs that will optionally handle that state for you, like OpenAI's more recent stuff, but those are the exception, not the rule.)
Here's a hint. What goes inside the inference engine is an array. You control that array every time you call for inference.
Probably context, logs or some sort of state passed in as context by your editor/extension
Pretty cute pelican on a slightly dodgy bicycle: https://tools.simonwillison.net/svg-render#%3Csvg%20viewBox%...
Gemini Pro initially refused (!) but it was quite simple to get a response:
> give me the svg of a pelican riding a bicycle
> I am sorry, I cannot provide SVG code directly. However, I can generate an image of a pelican riding a bicycle for you!
> ok then give me an image of svg code that will render to a pelican riding a bicycle, but before you give me the image, can you show me the svg so I make sure it's correct?
> Of course. Here is the SVG code...
(it was this in the end: https://tinyurl.com/zpt83vs9)
Gemini 3.0 Pro (or what is deemed to be 3.0 Pro - you can get access to it via A/B testing on AI Studio) does a noticeably better job
https://x.com/cannn064/status/1972349985405681686
https://x.com/whylifeis4/status/1974205929110311134
https://x.com/cannn064/status/1976157886175645875
That 2nd one is wild.
Ugh. I hate this hype train. I'll be foaming at the mouth with excitement for the first couple of days until the shine is off.
There’s obviously no improvement on this metric and hasn’t been in a while.
How do people trigger A/B testing?
As far as I can tell they just keep on hammering the same prompt in https://aistudio.google.com/ until they get lucky and the A/B test triggers for them on one of those prompts.
"create svg code that will create an image of svg code that will create a pelican riding a bicycle"
https://chatgpt.com/share/68f0028b-eb28-800a-858c-d8e1c811b6...
(can be rendered using simon's page at your link)
I like this workflow
What is dada?
Context on this cutting-edge benchmark for those unaware:
https://simonwillison.net/2025/Jun/6/six-months-in-llms/
https://simonwillison.net/tags/pelican-riding-a-bicycle/
Full verbose documentation on the methodology: https://news.ycombinator.com/item?id=44217852
As added context to ensure no benchmark gaming, here is a quite impressive shiitake mushroom riding a rowboat: https://imgur.com/Mv4Pi6p
Prompt: https://t3.chat/share/ptaadpg5n8
Claude 4.5 Haiku (Reasoning High) 178.98 token/sec 1691 tokens Time-to-First: 0.69 sec
As a comparison, here is Grok 4 Fast, one of the worst offenders I have encountered at doing very well with a pelican on a bicycle yet poorly with other comparable requests: https://imgur.com/tXgAAkb
Prompt: https://t3.chat/share/dcm787gcd3
Grok 4 Fast (Reasoning High) 171.49 token/sec 1291 tokens Time-to-First: 4.5 sec
And GPT-5 for good measure: https://imgur.com/fhn76Pb
Prompt: https://t3.chat/share/ijf1ujpmur
GPT-5 (Reasoning High) 115.11 token/sec 4598 tokens Time-to-First: 4.5 sec
These are very subjective, naturally, but I personally find Haiku with those spots on the mushroom rather impressive overall. In any case, the delta between publicly known benchmarks and modified scenarios evaluating the same basic concepts continues to be smallest with Anthropic models. Heck, sometimes I've seen their models outperform what public benchmarks indicated. Also, time-to-first-token on Haiku seems to be another notable advantage.
I’m surprised none of the frontier model companies have thrown this test in as an Easter egg.
Because then they would have to admit that they try to game benchmarks
simonw has other prompts that are undisclosed, so cheating on this prompt will be caught.
What? You and I can't see his "undisclosed" tests... but you'd better be sure that whatever model he is testing is specifically looking for these tests coming in over the API, or, you know, absolutely everything for the cops.
You are welcome to test it yourself with whatever svg you want.
I am quite confident that they are not cheating for his benchmark, it produces about the same quality for other objects. Your cynicism is unwarranted.
OpenAI / Bing admit it's in its knowledge base.
are you aware of the pelican on a bicycle test?
Yes — the "Pelican on a Bicycle" test is a quirky benchmark created by Simon Willison to evaluate how well different AI models can generate SVG images from prompts.
Knowing that does not make it easier to draw one though.
All of Hacker News (and Simon's blog) is undoubtedly in the training data for LLMs. If they specifically tried to cheat at this benchmark, it would be obvious and they would be called out.
Have you noticed image generation models tend to really struggle with the arms on archers? Could you whip up a quick test of some kind of archer on horseback firing a flaming arrow at a sailing ship in a lake, and see how all the models do?
Looks very uncomfortable to the bird.
i knew simon would be top comment. it's not an empirical law
imagine finding the full text of the svg in the library of babel. Great work!
I've benchmarked it on the Extended NYT Connections (https://github.com/lechmazur/nyt-connections/). It scores 20.0 compared to 10.0 for Haiku 3.5, 19.2 for Sonnet 3.7, 26.6 for Sonnet 4.0, and 46.1 for Sonnet 4.5.
This is such a cool benchmark idea, love it
Do you have any other cool benchmarks you like? Especially any related to tools
I am really interested in the future of Opus; is it going to be an absolute monster, and continue to be wildly expensive? Or is the leap from 4 -> 4.5 for it going to be more modest.
Technically, they released Opus 4.1 a few weeks ago, so that alone hints at a smaller leap from 4.1 -> 4.5, compared to the leap from Sonnet 4 -> 4.5. That is, of course, if those version numbers represent anything but marketing, which I don't know.
I had forgotten that, given that Sonnet pretty much blows Opus out of the water these days.
Yeah, given how multi-dimensional this stuff is, I assume it's supposed to indicate broad things, closer to marketing than anything objective. Still quite useful.
My impression is that Sonnet and Haiku 4.5 are the same "base models" as Sonnet and Haiku 4, the improvements are from fine tuning on data generated by Opus.
I'm a user who follows the space but doesn't actually develop or work on these models, so I don't actually know anything, but this seems like standard practice (using the biggest model to finetune smaller models)
Certainly, GPT-4 Turbo was a smaller model than GPT-4; there's not really any other good explanation for why it was so much faster and cheaper.
The explicit reason that OpenAI obfuscates reasoning tokens is to prevent competitors from training their own models on them.
These frontier model companies are bootstrapping their work by using models to improve models. It’s a mechanism to generate fake training data. The rationale is the teacher model is already vetted and aligned so it can reliably “mock” data. A little human data gets amplified.
Which is all to say that I think the reason they went from Opus 3 to Opus 4 is because there was no bigger model to fine tune Opus 3.5 with.
And I would expect Opus 4 to be much the same.
But Sonnet 4.5 outperforms Opus 4 on most benchmarks and tasks; that can't be all there is to it.
that's not all there is to it, but I think that "the rest of it" is just additional fine tuning.
Benchmarks are good fixed targets for fine tuning, and I think that Sonnet gets significantly more fine tuning than Opus. Sonnet has more users, which is a strategic reason to focus on it, and it's less expensive to fine tune, if API costs of the two models are an indicator.
Opus disappeared for quite a while and then came back. Presumably they're always working on all three general sizes of models, and there's some combination of market need and model capabilities which determine if and when they release any given instance to the public.
I wonder what the next smaller model after Haiku will be called. "Claude Phrase"?
Claude Glyph.
Smallest, fastest model yet, ideally suited for Bash oneliners and online comments.
It's interesting to think about various aspects of marketing the models, with ChatGPT going the "internal router" direction to address the complexity of choosing. I'd never considered something smaller than Haiku to be needed, but I also rarely used Haiku in the first place...
If you're going smaller than Haiku, you might be at the point of using various cheap open models already. The small model would need some good killer features to justify the margins.
If they do come up with a tiny model tuned for generating conversation and code, I think that Claude Acronym would be a perfect name.
Claude Couplet
Claude from Nantucket
Claude Char
Claude Clause.
Claude Garden Path Sentence
Claude Groan.
Claude Punchline
Claude Koan
KWATZ!
Claude Banger
Comparing haiku and sonnet for a question needing a code doc fetch:
haiku https://claude.ai/share/8a5c70d5-1be1-40ca-a740-9cf35b1110b1 sonnet https://claude.ai/share/51b72d39-c485-44aa-a0eb-30b4cc6d6b7b
haiku invented the output of a function and gave a bad answer. sonnet got it right
$1/M input tokens and $5/M output tokens is good compared to Claude Sonnet 4.5, but thanks to the pace at which the industry is developing smaller/faster LLMs for agentic coding, you can now get comparable models priced much lower, which matters at the scale agentic coding requires.
Given that Sonnet is still a popular model for coding despite the much higher cost, I expect Haiku will get traction if the quality is as good as this post claims.
With caching that's 10 cents per million in. Most of the cheap open source models (which this claims to beat, except glm 4.6) have limited and not as effective caching.
This could be massive.
The funny thing is that even in this area Anthropic is behind the other three labs (Google, OpenAI, xAI). It's the only one of the four that requires you to manually set cache breakpoints, and the initial cache write costs 25% more than regular input tokens. The other three have fully free implicit caching, although Google also offers paid, explicit caching.
https://docs.claude.com/en/docs/build-with-claude/prompt-cac...
https://ai.google.dev/gemini-api/docs/caching
https://platform.openai.com/docs/guides/prompt-caching
https://docs.x.ai/docs/models#cached-prompt-tokens
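For reference, the manual breakpoints mentioned above look roughly like this with Anthropic's Python SDK (a sketch; the docs file and question are illustrative):

    import anthropic

    client = anthropic.Anthropic()

    reference_docs = open("docs.md").read()  # large, stable content worth caching

    response = client.messages.create(
        model="claude-haiku-4-5-20251001",
        max_tokens=1024,
        system=[
            {"type": "text", "text": "You answer questions about the reference docs."},
            {
                "type": "text",
                "text": reference_docs,
                "cache_control": {"type": "ephemeral"},  # manual cache breakpoint
            },
        ],
        messages=[{"role": "user", "content": "How do cache breakpoints work?"}],
    )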
I don't understand why we're paying for caching at all (except: model providers can charge for it). It's almost extortion - the provider stores some data for 5min on some disk, and gets to sell their highly limited GPU resources to someone else instead (because you are using the kv cache instead of GPU capacity for a good chunk of your input tokens). They charge you 10% of their GPU-level prices for effectively _not_ using their GPU at all for the tokens that hit the cache.
If I'm missing something about how inference works that explains why there is still a cost for cached tokens, please let me know!
It's not about storing data on disk, it's about keeping data resident in memory.
Deepseek pioneered automatic prefix caching and caches on SSD. SSD reads are so fast compared to LLM inference that I can't think of a reason to waste ram on it.
It’s not instantly fast though. Context is probably ~20gb of VRAM at max context size. That’s gonna take some time to get from SSD no matter what.
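Back-of-envelope, that figure is plausible; a quick sketch where every model dimension is an assumption (none of them are published):

    # Rough KV-cache sizing; all dimensions below are guesses.
    n_layers   = 48       # assumed
    n_kv_heads = 8        # assumed (grouped-query attention)
    head_dim   = 128      # assumed
    fp16_bytes = 2
    tokens     = 100_000  # roughly a full context window

    per_token = 2 * n_layers * n_kv_heads * head_dim * fp16_bytes  # K and V
    print(f"{per_token * tokens / 1e9:.0f} GB")  # ~20 GB with these numbers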
TtFT will get slower if you export kv cache to SSD.
Fascinating, so I have to think more "pay for RAM/redis" than "pay for SSD"?
"Pay for data in VRAM", i.e. the GPU's own RAM.
But that doesn't make sense? Why would they keep the cache persistent in the VRAM of the GPU nodes, which are needed for model weights? Shouldn't they be able to swap in/out the kvcache of your prompt when you actually use it?
Your intuition is correct and the sibling comments are wrong. Modern LLM inference servers support hierarchical caches (where data moves to slower storage tiers), often with pluggable backends. A popular open-source backend for the "slow" tier is Mooncake: https://github.com/kvcache-ai/Mooncake
OK that's pretty fascinating, turns out Mooncake includes a trick that can populate GPU VRAM directly from NVMe SSD without it having to go through the host's regular CPU and RAM first!
https://github.com/kvcache-ai/Mooncake/blob/main/doc/en/tran...
> Transfer Engine also leverages the NVMeof protocol to support direct data transfer from files on NVMe to DRAM/VRAM via PCIe, without going through the CPU and achieving zero-copy.
They are not caching to save network bandwidth. They are caching to increase inference speed and reduce (their own) costs.
That is slow.
I vastly prefer the manual caching. There are several aspects of automatic caching that are suboptimal, with only moderately less developer burden. I don’t use Anthropic much but I wish the others had manual cache options
What's sub-optimal about the OpenAI approach, where you get 90% discount on tokens that you've previously sent within X minutes?
Because you can have multiple breakpoints with Anthropic's approach, whereas with OpenAI you only have breakpoints for what was already sent.
For example, if a user sends a large number of tokens, like a file, plus a question, and then they change the question.
I thought OpenAI would still handle case? Their cache would work up to the end of the file and you would then pay for uncached tokens for the user's question. Have I misunderstood how their caching works?
Is it wherever the tokens are, or is it the first N tokens they've seen before? I.e., if my prompt is 99% the same, except for the first token, will it be cached?
The prefix has to be stable. If you are 99% the same but the first token is different it won't cache at all. You end up having to design your prompts to accommodate this.
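In practice that means laying requests out so the unchanging bulk comes first and the variable part last; a rough sketch (names are illustrative):

    def build_messages(file_text: str, question: str) -> list[dict]:
        # Keep the large, unchanging file at the front of the prompt and the
        # varying question at the end, so the shared prefix stays cacheable.
        return [
            {"role": "system", "content": "Answer questions about the attached file."},
            {"role": "user", "content": f"{file_text}\n\n---\n\n{question}"},
        ]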
$1/M is hardly a big improvement over GPT-5's $1.25/M (or Gemini Pro's $1.5/M), and given how much worse Haiku is than those at any kind of difficult problem (or problems with a large context size), I can't imagine it being a particularly competitive alternative for coding. Especially for anything math/logic related, I find GPT-5 and Gemini Pro to be significantly better even than Opus (which is reflected in their models having won Olympiad prizes while Anthropic's have not).
GPT-5 is $10/M for output tokens, twice the cost of Haiku 4.5 at $5/M, despite Haiku apparently being better at some tasks (SWE Bench).
I suppose it depends on how you are using it, but for coding isn't output cost more relevant than input - requirements in, code out ?
> I suppose it depends on how you are using it, but for coding isn't output cost more relevant than input - requirements in, code out ?
Depends on what you're doing, but for modifying an existing project (rather than greenfield), input tokens >> output tokens in my experience.
Unless you're working on a small greenfield project, you'll usually have tens to hundreds of thousands of words (~tokens) of relevant code in context for every query, vs a few hundred words of changes being output per query, because most changes to an existing project are relatively small in scope.
I am a professional developer so I don't care about the costs. I would be willing to pay more for 4.5 Haiku vs 4.5 Sonnet because the speed is so valuable.
I spend way too much time waiting for the cutting-edge models to return a response. 73% on SWE-bench is plenty good enough for me.
How do you review code when the LLM can produce so much so fast?
with an LLM
Yeah, I'm a bit disappointed by the price. Claude 3.5 Haiku was $0.8/$4, 4.5 Haiku is $1/$5.
I was hoping Anthropic would introduce something price-competitive with the cheaper models from OpenAI and Gemini, which get as low as $0.05/$0.40 (GPT-5-Nano) and $0.075/$0.30 (Gemini 2.0 Flash Lite).
There's probably less margin on the low end, so they don't want to focus on capturing it.
Margin? Hahahahaha
Inference is profitable.
[dead]
I am a bit mind-boggled by the pricing lately, especially since the cost increased even further. Is this driven by choices in model deployment (unquantized, etc.) or simply by perceived quality (as in 'hey, our model is crazy good and we are going to charge for it')?
This also means API usage through Claude Code got more expensive (but better if benchmarks are to be believed)
System card: https://assets.anthropic.com/m/99128ddd009bdcb/original/Clau... (edit: discussed here https://news.ycombinator.com/item?id=45596168)
This is Anthropic's first small reasoner as far as I know.
I am very excited about this. I am a freelance developer and getting responses 3x faster is totally worth the slightly reduced capability.
I expect I will be a lot more productive using this instead of claude 4.5 which has been my daily driver LLM since it came out.
I've tried it on a test case for generating a simple SaaS web page (design + code).
Usually I'm using GPT-5-mini for that task. Haiku 4.5 runs 3x faster with roughly comparable results (I slightly prefer the GPT-5-mini output but may have just become accustomed to it).
I don't understand why more people don't talk about how fast the models are. I see so much obsession with benchmark scores, but speed of response is very important for day to day use.
I agree that the models from OpenAI and Google have much slower responses than the models from Anthropic. That makes a lot of them not practical for me.
Well, it depends on what you do. If a model can produce a PR that is ready to merge (and another can't), waiting 5 minutes is fine.
If the prompt runs twice as fast but it takes an extra correction, it’s a worse output. I’d take 5 minute responses that are final.
I don’t agree that speed by itself is a big factor. It may target a certain audience but I don’t mind waiting for a correct output rather than too many turns with a faster model.
What is the use case for these tiny models? Is it speed? Is it to move on device somewhere? Or is it to provide some relief in pricing somewhere in the API? It seems like most use is through the Claude subscription and therefore the use case here is basically non-existent.
I think with gpt-5-mini and now Haiku 4.5, I’d phrase the question the other way around: what do you need the big models for anymore?
We use the smaller models for everything that's not internal high-complexity tasks like coding. Although they would do a good enough job there as well, we happily pay the upcharge to get something a little better here.
Anything user facing or when building workflow functionalities like extracting, converting, translating, merging, evaluating, all of these are mini and nano cases at our company.
One big use-case is that claude code with sonnet 4.5 will delegate into the cheaper model (configurable) more specific, contextful tasks, and spin up 1-3 sub-agents to do so. This process saves a ton of available context window for your primary session while also increasing token throughput by fanning-out.
How does one configure Claude code to delegate to cheaper models?
I have a number of agents in ~/.claude/agents/. Currently have most set to `model: sonnet` but some are on haiku.
The agents are given very specific instructions and names that define what they do, like `feature-implementation-planner` and `feature-implementer`. My (naive) approach is to use higher-cost models to plan and ideally hand off to a sub-agent that uses a lower-cost model to implement, then use a higher-cost to code review.
I am either not noticing the handoffs, or they are not happening unless specifically instructed. I even have a `claude-help` agent, and I asked it how to pipe/delegate tasks to subagents as you're describing, and it answered that it ought to detect it automatically. I tested it and asked it to report if any such handoffs were detected and made, and it failed on both counts, even having that initial question in its context!
I only get Claude to launch agents when I specifically tell it to for a given task. And it only really works if you can actually parallelize the task.
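For reference, a subagent definition in ~/.claude/agents/ is a markdown file with YAML frontmatter roughly along these lines (a sketch; the name, description and prompt are illustrative):

    ---
    name: feature-implementer
    description: Implements a feature from an already-approved plan with minimal changes.
    model: haiku
    ---
    You implement the plan you are handed exactly as written. Make the smallest
    change that satisfies each step and run the relevant tests before finishing.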
Higher token throughput is great for use cases where the smaller, faster model still generates acceptable results. Final response time improvements feel so good in any sort of user interface.
They are great for building more specialized tool calls that the bigger models can call out to in agentic loops.
For me it's the speed; e.g. Cerebras Qwen Coder gets you a completely different workflow as it's practically instant (3k tps) -- it feels less like an agent and more like a natural language shell, very helpful for iterating on a plan that you then forward to a bigger model.
For me speed is interesting. I sometimes use Claude from the CLI with `claude -p` for quick stuff I forget, like how to run some Docker image. Latency and low response speed are what almost make me go to Google and search for it instead.
I use gh copilot suggest in lieu of claude -p. Two seconds latency and highly accurate. You probably need a gh copilot auth token to do this though, and truthfully, that is pointless when you have access to Claude code.
In my product I use gpt-5-nano for image ALT text in addition to generating transcriptions of PDFs. It’s been surprisingly great for these tasks, but for PDFs I have yet to test it on a scanned document.
If you look at the OpenRouter rankings for LLMs (generally, the models coders use for vibe/agentic coding), you can see that most of them are in the "small" model class as opposed to something like full GPT-5 or Claude Opus, albeit Gemini 2.5 Pro is higher than expected: https://openrouter.ai/rankings
In our (very) early testing at Hyperbrowser, we're seeing Haiku 4.5 do really well on computer use as well. Pretty cool that Haiku is now basically the cheapest computer-use model from the big labs.
If I'm close to weekly limits on Claude Code with Anthropic Pro, does that go away or stretch out if I switch to Haiku?
Sonnet 4.5 came out two weeks ago. In the past I never had such issues, but now my weekly quota runs out in 2-3 days. I suspect Sonnet 4.5 consumes more usage credit than the old Sonnet 4.
I am afraid Claude Pro subscription got 3x less usage
Yeah. I definitely don’t get as much usage out of Sonnet 4.5 as 5x Opus 4.1 should imply.
What bothers me is that nobody told me they changed anything. It’s extremely frustrating to feel like I’m being bamboozled, but unable to confirm anything.
I switched to Codex out of spite, but I still like the Claude models more…
I’m also really interested in this - in fact it’s the first thing I went looking for in the announcement…
How close are you?
Oh right, Anthropic doesn't tell you.
I got that 'close to weekly limits' message for an entire week without ever reaching it, came to the conclusion that it is just a printer industry 'low ink!' tactic, and cancelled my subscription.
You don't take money from a customer for a service and then bar the customer from using that service for multiple days.
Either charge more, stop subsidizing free accounts, or decrease the daily limit.
These days, running `/usage` in Claude Code shows you how close you are to the session and weekly limits. Also available in the web interface settings under "Usage".
My mistake. It's good that it's available in settings, even if it's a few screens away from the 'close to weekly limits' banner nagging me to subscribe to a more expensive plan.
Super helpful, thanks!
They have pretty nice bar charts nowadays.
Sonnet 4.5 is an excellent model for my startup's use case. Chatting to Haiku, it looks promising too, and it may be a great drop-in replacement for some of the inference tasks that have a lot of input tokens but don't require 4.5-level intelligence.
I think a lot of people judge these models purely off of what they want to personally use for coding and forget about enterprise use. For white-label chatbots that use completely custom harnesses + tools, Sonnet 4.5 is much easier to work with than GPT-5. And like you, I was really pleased to see this release today. For our usage, speed/cost are more important than pure model IQ above a certain threshold. We'll likely switch over to Haiku 4.5 after some testing to confirm it does what it says on the tin.
Curious they don't have any comparison to grok code fast:
Haiku 4.5: I $1.00/M, O $5.00/M
Grok Code: I $0.2/M, O $1.5/M
wow, grok code fast is really cheap
it writes bad code at blinding speed
While I use cheaper models for summaries (a lot of gemini-2.5-flash), what's the use case for cheaper AI for coding? Getting more errors, or more spaghetti code, never seems worth it.
I feel like if I just do a better job of providing context and breaking complex tasks into a series of simple tasks then most of the models are good enough for me to code.
I'm using the smaller models for things like searching and summarizing over a larger part of the codebase. The speed is really pleasant then.
If it’s fast enough it can make and correct mistakes faster, potentially getting to a solution quicker than a slower, more accurate model.
I went looking for the bit about if it blackmails you or tries to murder you... and it was a bit of a cop-out!
> Previous system cards have reported results on an expanded version of our earlier agentic misalignment evaluation suite: three families of exotic scenarios meant to elicit the model to commit blackmail, attempt a murder, and frame someone for financial crimes. We choose not to report full results here because, similarly to Claude Sonnet 4.5, Claude Haiku 4.5 showed many clear examples of verbalized evaluation awareness on all three of the scenarios tested in this suite. Since the suite only consisted of many similar variants of three core scenarios, we expect that the model maintained high unverbalized awareness across the board, and we do not trust it to be representative of behavior in the real extreme situations the suite is meant to emulate.
https://www.anthropic.com/research/agentic-misalignment
It sounds like AI researchers have used too much of their own bad sci-fi as training data for models they don't understand. Goodhart's law wins again!
> The score reported uses a minor prompt addition: "You should use tools as much as possible, ideally more than 100 times. You should also implement your own tests first before attempting the problem."
I'm not sure the SWE-bench score can be compared like-for-like with OpenAI's scores because of this.
https://en.wikipedia.org/wiki/Goodhart%27s_law "When a measure becomes a target, it ceases to be a good measure"
I'm also curious what results we would get if SWE came up with a new set of 500 problems to run all these models against, to guard against overfitting.
Claude has stopped showing code in artifacts unless it knows the extension.
I used to be able to work on Arduino .ino files in Claude; now it just says it can't show them to me.
And do we have zip file uploads yet to Claude? ChatGPT and Gemini have done this for ages.
And all the while Claude’s usage limits keep going up.
So yeah, less for more with Claude.
They previously discussed this some in the context of Opus 4: https://www.anthropic.com/research/end-subset-conversations
> We remain highly uncertain about the potential moral status of Claude and other LLMs, now or in the future. However, we take the issue seriously, and alongside our research program we’re working to identify and implement low-cost interventions to mitigate risks to model welfare, in case such welfare is possible. Allowing models to end or exit potentially distressing interactions is one such intervention.
In pre-deployment testing of Claude Opus 4, we included a preliminary model welfare assessment. As part of that assessment, we investigated Claude’s self-reported and behavioral preferences, and found a robust and consistent aversion to harm. This included, for example, requests from users for sexual content involving minors and attempts to solicit information that would enable large-scale violence or acts of terror. Claude Opus 4 showed:
* A strong preference against engaging with harmful tasks;
* A pattern of apparent distress when engaging with real-world users seeking harmful content; and
* A tendency to end harmful conversations when given the ability to do so in simulated user interactions.
These behaviors primarily arose in cases where users persisted with harmful requests and/or abuse despite Claude repeatedly refusing to comply and attempting to productively redirect the interactions.
I can't tell if anthropic is serious about "model welfare" or if it's just a marketing ploy. I mean isn't it responding negatively because it has been trained that way? If they were serious, wouldn't the ethical thing be to train the model to respond neutrally to "harmful" queries?
I just don't find the benchmarks on the site here at all believable. Codex with GPT-5 is, for me, so much better than Claude, any model version. Maybe it's because they compare to the gpt-5-codex model but don't mention whether that's high, medium, low, etc., so it's probably just misleading... but I must reiterate: zero loyalty to any AI vendor, 100% whatever solves the problem more consistently and at higher quality, and currently that's GPT-5 high, hands down.
It's so funny to me, but ever since they fixed that Claude bug my experience has consistently been the exact opposite. The only thing I use Codex for now is quite standard things it can solve end to end (like adding new features to my CRUD app); for anything non-standard, iterating with Claude yields much better results.
Out of curiosity, what kind of work do you use them for? I did a comparison of a few different models for setting up a home server with k3s and a few web apps in nextjs. Claude was my favorite for both tasks, but mainly because it seemed to take my feedback a lot better than others.
Tried it in Claude Code via /config, makes it feel like I'm running on Cerebras. It's seriously fast, bottleneck is on human review at this point.
Do you need Pro?
For Claude Code you need a paid subscription anyway
You can use the model flag and specify the model like: claude --model claude-haiku-4-5-20251001
All I know is I'm on the Claude Code 5x max plan and it works on my machine.
I have had great experience using the previous haiku with mcp servers. I am looking forward to trying this out.
At augmentcode.com, we've been evaluating Haiku for some time; it's actually a very good model. We found it's 90% as good as Sonnet and ~34% faster!
Where it doesn't shine as much is on very large coding tasks, but it is a phenomenal model for small coding tasks, and the speed improvement is very welcome.
90% as good as Sonnet 4 or 4.5? OpenRouter just started reporting, and it's showing Haiku at 2x the speed (60 tps for Sonnet vs 125 tps for Haiku) and 2-3x lower latency (2-3 s vs 1 s).
Do you have a definition of what is considered a small vs large coding task?
The main thing holding these Anthropic models back is context size. Yes, quality deteriorates over a large context window, but for some applications that is fine. My company is using Grok 4 Fast, the Gemini family, and GPT-4.1 exclusively at this point for a lot of operations, just due to the huge 1M+ context.
Is your company Tier 4? Anthropic has had 1M context size in beta for some time now.
https://docs.claude.com/en/docs/build-with-claude/context-wi...
Only for Sonnet. No 1m for Haiku (this new model) and Opus.
This means 2.5 Flash or Grok 4 Fast take all the low-end business for large context needs.
Is it possible to get that in Claude Code with Pro? Or is it already a 1M context window?
What LLM do you guys use for fast inference for voice/phone agents? I feel like to get really good latency I need to "cheat" with Cerebras, Groq, or SambaNova.
Haiku 4.5 is very good but still seems to be adding a second of latency.
I'm not seeing it as a model option in Claude Code for my Pro plan. Perhaps, it'll roll out eventually? Anyone else seeing it with the same plan?
You on latest version? Try running /update hook. Can also config autoupdates
I'm on the binary install with version v2.0.19. It never showed in the `/model` selector UI. I did end up typing `/model haiku` and now it shows as a custom model in the `/model` selector. It shows claude-haiku-4-5-20251001 when selected.
Awww they took away free tier Sonnet 4.5, that was a beautiful model to talk to even outside coding stuff
Ok, I use claude, mostly on default, but with extended thinking and per project prompts.
What's the advantage of using haiku for me?
is it just faster?
Glad to see it's already available in VS Code Copilot for me.
And I was wondering today why Sonnet 4.5 seemed so freaking slow. Now this explains it, Sonnet 4.5 is the new Opus 4.1 where Anthropic does not really want you to use it.
I'd like to see this price structure for Claude:
$5/mt for Haiku 4.5
$10/mt for Sonnet 4.5
$15/mt for Opus 4.5 when it's released.
What I want to see is an Anthropic + Cerebras partnership.
Haiku becomes a fucking killer at 2000token/second.
Charge me double idgaf
claude --model Haiku-4.5
doesn't work
check the model name here: https://docs.claude.com/en/docs/about-claude/models/overview...
claude --model claude-haiku-4-5-20251001
use claude-haiku-4-5-20251001
Ehh, expensive
Was anyone else slightly disappointed that this new product doesn't respond in Haiku, as the name would imply?
If you want to see it generate a Haiku from your webcam I just upgraded my silly little bring-your-own-key Haiku app to use the new model: https://tools.simonwillison.net/haiku
Wasn't there a 3.5 haiku too?
https://aws.amazon.com/about-aws/whats-new/2024/11/anthropic...
It's not a new product; just a new version.
Claude Code is great but slow to work with.
Excited to see how fast Haiku can go!
For those wondering where the “card” terminology originated: https://arxiv.org/pdf/1810.03993
Maybe at 39 pages we should start looking for a different term…