321 points by behnamoh 6 days ago | 175 comments | View on ycombinator
jimmydoe 6 days ago |
Very poor communication, despite some reasonable intentions, could be the beginning of the end for Claude Code.
d4rkp4ttern 6 days ago |
“….you can use Claude code in Zed but you can’t hijack the rate limits to do other ai stuff in zed.”
This was a response to my asking whether we can use the Claude Max subscription for the awesome inline assistant (Ctrl+Enter in the editor buffer) without having to pay for yet another metered API.
The answer is no; the above was a response to a follow-up.
An aside - everyone is abuzz about “Chat to Code”, which is a great interface when you rarely or only occasionally look at the generated code. But for writing prose? It’s safe to say most people definitely want to see what’s being written, and in that case “chat” is not the best interaction. Something like the inline assistant, where you are immersed in the writing, is far better.
llmslave3 6 days ago |
How would they even detect that you used CC on a competitor? There's surely no ethical reason not to do it, and it seems unenforceable.
throwaw12 6 days ago |
Suppose I wrote a custom agent that performs tasks for a niche industry. Wouldn't that be considered "building a competing service", since their Service is performing agentic tasks via Claude Code?
pjmlp 6 days ago |
Some people never learn from history, it seems.
Imustaskforhelp 6 days ago |
This really shouldn't be the direction Anthropic goes in. It's such a negative move: they could have tried to cooperate and communicate with the large open-source agents, but they chose to do this instead, and the developer community has met it with criticism, rightfully so.
narmiouh 6 days ago |
If this is only meant to limit knowledge distillation for training new models, people copying Claude Code specifically, or Max plan credentials being used as an API replacement, they could carve out proper exceptions rather than being so broad that they risk turning away new customers for fear of (future) conflict.
zingar 6 days ago |
Is this them saying that their human developers don’t add much to their product beyond what the AI does for them?
VoxPelli 6 days ago |
I remember when I was part of procuring an analytics tool for a previous employer and they had a similar clause that would essentially have banned us from building any in-house analytics while we were bound by that contract.
We didn't sign.
Centigonal 5 days ago |
Not a very hacker-friendly strategy, but Apple's market cap is pretty big. I think it comes down to whether Anthropic can make a product with enough of a lead over competitors to offset the restrictions.
with 6 days ago |
The ToS is concerning, and I have concerns with Anthropic in general, but this policy enforcement is not problematic to me.
(yes, I know, Anthropic's entire business is technically built on scraping. but ideally, the open web only)
mcintyre1994 6 days ago |
This tweet reads as nonsense to me
It's quoting:
> This is why the supported way to use Claude in your own tools is via the API. We genuinely want people building on Claude, including other coding agents and harnesses, and we know developers have broad preferences for different tool ergonomics. If you're a maintainer of a third-party tool and want to chat about integration paths, my DMs are open.
And the linked tweet says that such integration is against their terms.
The highlighted term says that you can't use their services to develop a competing product/service. I don't read that as the same as integrating their API into a competing product/service. It does seem to suggest you can't develop a competitor to Claude Code using Claude Code, as the title says, which is a bit silly, but doesn't contradict the linked tweet.
I suspect they have this rule to stop people using Claude to train other models, or competitors testing outputs etc, but it is silly in the context of Claude Code.
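For concreteness: the "supported way ... via the API" quoted above means bringing your own metered API key through the official SDK, not capturing a Claude Code subscription login. A minimal, hedged sketch with Anthropic's Python SDK (the model ID is just an example; substitute whatever you have access to):

```python
# Sketch of the API integration path: your own metered key via the official SDK,
# not a Claude Code subscription/OAuth token routed into a third-party harness.
import anthropic

client = anthropic.Anthropic()  # reads ANTHROPIC_API_KEY from the environment

message = client.messages.create(
    model="claude-sonnet-4-20250514",  # illustrative model ID
    max_tokens=512,
    messages=[{"role": "user", "content": "Summarize this diff in two sentences."}],
)
print(message.content[0].text)
```

Per the rest of the thread, the objection in the linked tweet is about the other path: launching Claude Code's OAuth screen and reusing the subscription token in your own harness.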
ChrisArchitect 6 days ago |
Anthropic blocks third-party use of Claude Code subscriptions
sharat87 6 days ago |
Which seems fine? They could've just not offered the $200 plan and perhaps nobody would've complained. They tried it, noticed it was unsustainable, so they're trying to remodel it so it _is_ sustainable.
I think the upset is misplaced. :shrug:
throw1235435 6 days ago |
Claude code making itself obsolete is banned.
slowmovintarget 6 days ago |
Anthropic has just entered the "for laying down and avoiding" category.
oxag3n 6 days ago |
Can they sue maintainers?
bastawhiz 6 days ago |
To me this reads like the same kind of restriction as if I were to:
1. Pay for a stock photo library and train an image model with it that I then sell.
2. Use a spam detection service, train a model on its output, then sell that model as a competitor.
3. Hire a voice actor to read some copy, train a text to speech model on their voice, then sell that model.
This doesn't mean you can't tell Claude "hey, build me a Claude Code competitor". I don't even think they care about the CLI. It means you can't ask Claude to build things, then train a new LLM on what Claude built. Claude can't be your training data.
There's an argument to be made that Anthropic didn't obtain their training material in an ethical way so why should you respect their intellectual property? The difference, in my opinion, is that Anthropic didn't agree to a terms of use on their training data. I don't think that makes it right, necessarily, but there's a big difference between "I bought a book, scanned it, learned its facts, then shredded the book" and "I agreed to your ToS then violated it by paying for output that I then used to clone the exact behavior of the service."
wilg 6 days ago |
I swear to god everyone is spoiling for a fight because they're bored. All these AI companies have this language to try to prevent people from "distilling" their model into other models. They probably wrote this before even making Claude Code.
Worst case scenario they cancel your account if they really want to, but almost certainly they'll just tweak the language once people point it out.
You can use Claude Code to write code to make a competitor for Claude Code. What you cannot do is reverse engineer the way the Claude Code harness uses the API to build your own version that taps into stuff like the max plan. Which? makes sense?
From the thread:
> A good rule of thumb is, are you launching a Claude code oauth screen and capturing the token. That is against terms of service.