andsoitis 18 hours ago [-]
> While working on Cutlet, though, I allowed Claude to generate every single line of code. I didn’t even read any of the code. Instead, I built guardrails to make sure it worked correctly (more on that later).
Impressive. As a practical matter, one wonders what the point would be in creating a new programming language if the programmer no longer has to write or read code.
Programming languages are, after all, the interface that a human uses to give instructions to a computer. If you're not writing or reading it, the language, by definition, doesn't matter.
marssaxman 18 hours ago [-]
The constraints enforced in the language still matter. A language which offers certain correctness guarantees may still be the most efficient way to build a particular piece of software even when it's a machine writing the code.
There may actually be more value in creating specialized languages now, not less. Most new languages historically go nowhere because convincing human programmers to spend the time it would take to learn them is difficult, but every AI coding bot will learn your new language as a matter of course after its next update includes the contents of your website.
raincole 17 hours ago [-]
> every AI coding bot will learn your new language
If there are millions of lines on github in your language.
Otherwise the 'teaching the AI to write your language' part will occupy so much context that it becomes far less efficient than just using TypeScript.
Maxatar 15 hours ago [-]
I have not found this to be the case. My company has some proprietary DSLs we use and we can provide the spec of the language with examples and it manages to pick up on it and use it in a very idiomatic manner. The total context needed is 41k tokens. That's not trivial but it's also not that much, especially with ChatGPT Codex and Gemini now providing context lengths of 1 million tokens. Claude Code is very likely to soon offer 1 million tokens as well and by this time next year I wouldn't be surprised if we reach context windows 2-4x that amount.
The vast majority of tokens are not used for documentation or reference material but rather are for reasoning/thinking. Unless you somehow design a programming language that is just so drastically different than anything that currently exists, you can safely bet that LLMs will pick them up with relative ease.
joshstrange 14 hours ago [-]
> Claude Code is very likely to soon offer 1 million tokens as well
You can do it today if you are willing to pay (API or on top of your subscription) [0]
> The 1M context window is currently in beta. Features, pricing, and availability may change.
> Extended context is available for:
> API and pay-as-you-go users: full access to 1M context
> Pro, Max, Teams, and Enterprise subscribers: available with extra usage enabled
> Selecting a 1M model does not immediately change billing. Your session uses standard rates until it exceeds 200K tokens of context. Beyond 200K tokens, requests are charged at long-context pricing with dedicated rate limits. For subscribers, tokens beyond 200K are billed as extra usage rather than through the subscription.
That's not true. I'm working on a language, and LLMs have no problem writing code in it even though only ~200 lines of code exist in the language, all of them in my repo.
calvinmorrison 17 hours ago [-]
Uh, not really. I am already having Claude read and then one-shot proprietary ERP code written in a vintage, closed-source, OOP-oriented BASIC with sparse documentation... I just needed to feed in the millions of lines of code I have, and it works.
jonfw 15 hours ago [-]
I'm sure Claude does great at that, but it would be objectively better, for a large variety of reasons, if Claude didn't have to keep syntax examples in its context.
calvinmorrison 14 hours ago [-]
For sure. About 6 months ago it absolutely couldn't do it and kept getting confused even when I tried to do RAG against the manuals provided (only downloadable from a shady .ru site, LOL), but now... like butter. The context seems to mostly be it reading and writing related stuff?
vrighter 17 hours ago [-]
"i haven't been able to find much" != "there isn't much on the entire internet fed into them"
UncleOxidant 18 hours ago [-]
> but every AI coding bot will learn your new language as a matter of course after its next update includes the contents of your website.
That's assuming that your new, very unknown language gets slurped up in the next training session which seems unlikely. Couldn't you use RAG or have an LLM read the docs for your language?
clickety_clack 18 hours ago [-]
Agreed - unpopular languages and packages have pretty shaky outcomes with code generation, even ones that have been around since before 2023.
almog 17 hours ago [-]
Neither RAG nor loading the docs into the context window would produce effective results. Even including the grammar files and a few examples in the training set wouldn't help. To get any usable results you still need many, many usage examples.
fcatalan 15 hours ago [-]
My own 100%-hallucinated language experiment is very, very weird, and it still has thousands of lines of generated examples that work fine. When doing complex stuff you could see the agent bounce against the tests here and there, but it never produced non-working code in the end. The only examples available were those it had generated itself as it made up the language.
It was capable of making things like a JSON parser/encoder, a TODO webapp or a command line kanban tracker for itself in one shot.
marssaxman 16 hours ago [-]
And yet it works well enough, regardless. I have a little project which defines a new DSL. The only documentation or examples which exist for this little language, anywhere in the world, are on my laptop. There is certainly nothing in any AI's training data about it. And yet: codex has no trouble reading my repo, understanding how my DSL works, and generating code written in this novel language.
danielvaughn 18 hours ago [-]
In addition, I think token efficiency will continue to be a problem. So you could imagine very terse programming languages that are roughly readable for a human, but optimized to be read by LLMs.
Insanity 17 hours ago [-]
That's an interesting idea. But IMO the real 'token saver' isn't in the language keywords but it's in the naming of things like variables, classes, etc.
There are languages that are already pretty sparse with keywords. E.g. in Go you can write `func add(a, b int) int`; there's no need to declare that it's public, or static, etc. So combining a less verbose language with 'code-golfing' the variables might be enough.
danielvaughn 16 hours ago [-]
I'm not an expert in LLMs, but I don't think character length matters. Text is deterministically tokenized into byte sequences before being fed as context to the LLM, so in theory `mySuperLongVariableName` uses the same number of tokens as `a`. Happy to be corrected here.
fragmede 4 hours ago [-]
Running it through https://platform.openai.com/tokenizer: "mySuperLongVariableName" takes 5 tokens; "a" takes 1. "mediumvarname" is 3, though. ("though" is 1.)
coderenegade 13 hours ago [-]
You're more likely to save tokens in the architecture than the language. A clean, extensible architecture will communicate intent more clearly, require fewer searches through the codebase, and take up less of the context window.
gf000 17 hours ago [-]
Go is one of the most verbose mainstream programming languages, so that's a pretty terrible example.
Insanity 16 hours ago [-]
Maybe not a perfect example but it’s more lightweight than Java at least haha
gf000 16 hours ago [-]
If by lightweight you mean verbosity, then absolutely not.
In Go, every third line is a noisy `if err` check.
LtWorf 17 hours ago [-]
Well LLMs are made to be extremely verbose so it's a good match!
nineteen999 15 hours ago [-]
I think there's a huge range here - ChatGPT to me seems extra verbose on the web version, but when running with Codex it seems extra terse.
Claude seems more consistently _concise_ to me, both in web and cli versions.
But who knows, after 12 months of stuff it could be me who is hallucinating...
giancarlostoro 16 hours ago [-]
To you maybe, but Go is running a large amount of internet infrastructure today.
gf000 16 hours ago [-]
How does that relate to Go being a verbose language?
giancarlostoro 15 hours ago [-]
It's not verbose to some of us. It is explicit in what it does, meaning I don't have to wonder if there's syntactic sugar hiding intent. Drastically more minimal than equivalent code in other languages.
gf000 15 hours ago [-]
Verbosity is an objective metric.
Code readability is another, correlated metric, but a more subjective one. To me, Go scores pretty low here: code flow would be readable were it not for the huge amount of noise you get from error "handling" (it is mostly just syntactic ceremony, often failing to properly handle the error case, and people are so desensitized to these blocks that code reviews are more likely to miss them).
For function signatures, they made the syntax terser, in my subjective opinion, at the expense of readability. There were two very mainstream schools of thought on type-signature syntax, `type ident` and `ident : type`. Go opted for a third one that is unfamiliar to both camps, while not even gaining the benefits of the second (subjectively, that `:` helps the eye "pattern match" these expressions).
giancarlostoro 14 hours ago [-]
Every time I hear complaints about error handling, I wonder if people have next to no try catch blocks or if they just do magic to hide that detail away in other languages? Because I still have to do error handling in other languages roughly the same? Am I missing something?
gf000 5 hours ago [-]
Exceptions travel up the stack on their own. Given that most error cases can't be handled immediately and locally (otherwise they would be handled already rather than returned), but only higher up (e.g. a web server deciding to return an error code), exceptions save you a lot of boilerplate: you only have the throw at the source and the catch at the handler.
Meanwhile, Go has some boilerplate at every single level.
Errors as values can be made ergonomic: there is the FP-heavy monadic solution with `do` notation, or an operator like Rust's `?`. Go has none of these.
thunky 13 hours ago [-]
Lots of non-go code out there on the Internet if you ever decide you want to take a look.
politician 13 hours ago [-]
You're not missing anything. I've worked with many developers who are clueless about error handling; who treat it as a mostly optional side quest. It's not surprising that folks see the explicit error handling in Go as a grotesque interruption of the happy path.
jurgenburgen 4 hours ago [-]
That’s a pretty defensive take.
You don’t have to hate Go to agree that Rust’s `?` operator is much nicer when all you want to do is propagate the error.
idiotsecant 17 hours ago [-]
I think I remember seeing research right here on HN that terse languages don't actually help all that much
thomasmg 17 hours ago [-]
I would be very interested in this research... I'm trying to write a language that is simple and concise like Python, but fast and statically typed. My gut feeling is that anything more concise than Python (J, K, or some code-golfing language) is bad for readability, but so is the verbosity of Rust, Zig, or Java.
Those constraints can be enforced by a library too. Even humans sometimes make a whole new language for something that can be a function library. If you want strong correctness guarantees, check the structure of the library calls.
Programming languages function in large parts as inductive biases for humans. They expose certain domain symmetries and guide the programmer towards certain patterns. They do the same for LLMs, but with current AI tech, unless you're standing up your own RL pipeline, you're not going to be able to get it to grok your new language as well as an existing one. Your chances are better asking it to understand a library.
imiric 17 hours ago [-]
> every AI coding bot will learn your new language as a matter of course after its next update includes the contents of your website.
How will it "learn" anything if the only available training data is on a single website?
LLMs struggle with following instructions when their training set is massive. The idea that they will be able to produce working software from just a language spec and a few examples is delusional. It's a fundamental misunderstanding of how these tools work. They don't understand anything. They generate patterns based on probabilities and fine tuning. Without massive amounts of data to skew the output towards a potentially correct result they're not much more useful than a lookup table.
Zak 17 hours ago [-]
They don't understand anything, but they sure can repeat a pattern.
I'm using Claude Code to work on something involving a declarative UI DSL that wraps a very imperative API. Its first pass at adding a new component required imperative management of that component's state. Without that implementation in context, I told Claude the imperative pattern "sucks" and asked for an improvement just to see how far that would get me.
A human developer familiar with the codebase would easily understand the problem and add some basic state management to the DSL's support for that component. I won't pretend Claude understood, but it matched the pattern and generated the result I wanted.
This does suggest to me that a language spec and a handful of samples is enough to get it to produce useful results.
dmd 16 hours ago [-]
It's wild to me the disconnect between people who actually use these tools every day and people who don't.
I have done exactly the above with great success. I work with a weird proprietary esolang sometimes that I like, and the only documentation - or code - that exists for it is on my computer. I load that documentation in, and it works just fine and writes pretty decent code in my esolang.
"But that can't possibly work [based on my misunderstanding of how LLMs work]!" you say.
Well, it does, so clearly you misunderstand how they work.
ModernMech 16 hours ago [-]
The reason it works so well is that everyone’s “personal unique language” really isn’t all that different from what’s been proposed before, and any semantic differences are probably not novel. If you make your language C + transactional memory, the LLM probably has enough information about both to reason about your code without having to be trained on a billion lines.
Probably if you’re trying to be esoteric and arcane then yeah, you might have trouble, but that’s not normally how languages evolve.
dmd 16 hours ago [-]
No, mine's an esoteric declarative data description/transform language. It's pretty damn weird.
wizzwizz4 15 hours ago [-]
You may underestimate the weirdness of existing declarative data transformation languages. On a scale of 1 to 10, XSLT is about a 2 or 3.
When you say "weird" you mean "different from mainstream languages", but the exact way in which your language is weird (declarative data description/transformation) is probably exactly where languages will be going in the future because of how well-suited they are for LLM reading and writing. Those languages expose the structure of the computation directly such as data shapes and the relationships that transform them, rather than burying intent inside control flow.
With more explicit types and dataflow information, the model doesn't need to simulate execution (something LLMs are particularly bad at) as much as recognize and extend a transformation graph (something LLMs are particularly good at). So it's probably just that your particularly weird language is particularly well-adapted to LLM technology.
imiric 15 hours ago [-]
My comment is based precisely on using these tools frequently, if not daily, so what's wild is you assuming I don't.
The impact that lack of training data has on the quality of the results is easily observable. Try getting them to maintain a Python codebase vs., e.g., an Elixir one; not just generating short snippets of code, but actually assisting in maintaining it. You'll constantly run into basic issues like invalid syntax, missing references, and use of nonexistent APIs, not to mention more functional problems like dead, useless, or unnecessarily complicated code. I run into these things with mainstream languages (Go, Python, Clojure), so I don't see how an esolang could possibly fare any better.
But then again, the definitions of "just fine" and "decent" are subjective, and these tools are inherently unreliable, which is where I suspect the large disconnect in our experiences comes from.
michaelbrave 2 hours ago [-]
A few months back I had a similar thought and started working on a language that was really verbose and human-readable; think COBOL with influences from Swift. The core idea was that this would be a business language that business people would, or could, read if they needed to, so it could be used for financial and similar use cases, with built-in logic engines similar to Prolog or Mercury. My idea was that once languages start being coded by AI there are two directions to go: either we max out efficiency and speed (basically let the AI code in assembly), or we lean the other way and optimize for human error-checking and clear outputs on how a process flows; my theory headed in that second direction. But of course I failed. I'd never made a programming language before (I've coded a long time, but that's not the same thing), and the AIs at the time, combined with my lack of knowledge, caused a spectacular failure. I still think the theory is correct, though, especially for financial or business logic: having the code be human-readable enough that even a non-technical person can check it for problems. I still see a future where that is useful.
voxleone 17 hours ago [-]
In the 90s people hoped Unified Modeling Language diagrams would generate software automatically. That mostly didn’t happen. But large language models might actually be the realization of that old dream. Instead of formal diagrams, we describe the system in natural language and the model produces the code. It reminds me of the old debates around visual web tools vs hand-written HTML. There seems to be a recurring pattern: every step up the abstraction ladder creates tension between people who prefer the new layer and those who want to stay closer to the underlying mechanics.
Roughly: machine code --> assembly --> C --> high-level languages --> frameworks --> visual tools --> LLM-assisted coding. Most of those transitions were controversial at the time, but in retrospect they mostly expanded the toolbox rather than replacing the lower layers.
One workflow I’ve found useful with LLMs is to treat them more like a code generator after the design phase. I first define the constraints, objects, actors, and flows of the system, then use structured prompts to generate or refine pieces of the implementation.
abraxas 17 hours ago [-]
I agree with the sentiment but want to point out that the biggest drive behind UML was the enrichment of Rational Software and its founders. I doubt anyone ever succeeded in implementing anything useful with Rational Rose. But the Rational guys did have a phenomenal exit and that's probably the biggest success story of UML.
I'm being slightly facetious of course, I still use sequence diagrams and find them useful. The rest of its legacy though, not so much.
spelunker 18 hours ago [-]
Like everything generated by LLMs though, it is built on the shoulders of giants - what will happen to software if no one is creating new programming languages anymore? Does that matter?
Fnoord 12 hours ago [-]
Without proper attribution, it seems more fair to say copyright infringement occurred, on a massive scale if I may add. The burden of proof lies at the owners of the LLM. Which is why, if you do not want a blackbox, you want training data to be properly specified. That ain't happening now because of the skeletons in the closet.
idiotsecant 17 hours ago [-]
I think the only hope is that AGI arises and picks up where humanity left off. Otherwise I think this is the long dark teatime of human engineering of all sorts.
tartoran 15 hours ago [-]
So you’re hoping for a blackbox uninspectable by humans? That to me sounds like a nightmare, a nightmare worse than all the cruft and stupid rules humanity accrued over time. Let’s hope the future tech is inspectable and understandable by humans.
idiotsecant 14 hours ago [-]
I think if we assume that AGI will be a thing the odds of future tech remaining inspectable by humans is pretty unlikely. Would you build a car so that your dog can maintain it?
tartoran 7 hours ago [-]
Fully understandable end to end by any normal human and inspectable enough for human governance are different things. In any sane world, AGI would be built inside a human institutional environment: laws, audits, liability, safety engineering, access controls, operational constraints, etc. We do not build planes so passengers can reconstruct the turbine from scratch, but we still require them to be inspectable by the people responsible for certifying/repairing them. The right standard is not whether an average person can rebuild or fully understand the whole machine, but whether human institutions can reliably inspect, verify, and govern it. If they can't, then the technology is not mature enough to trust.
_aavaa_ 18 hours ago [-]
I don't agree with the idea that programming languages don't have an impact on an LLM's ability to write code. If anything, I imagine that, all else being equal, a language whose compiler enforces multiple levels of correctness would help the AI get to a goal faster.
phn 18 hours ago [-]
A good example of this is Rust. Rust is memory-safe by default compared to, say, C, at the expense of having to be deliberate in managing memory. With LLMs this equation changes significantly, because that harder/more verbose code is being written by the LLM, so it won't slow you down nearly as much. Even better, the LLM can interact with the compiler if something is not exactly as it should be.
On a different but related note, it's almost the same as pairing django or rails with an LLM. The framework allows you to trust that things like authentication and a passable code organization are being correctly handled.
munksbeer 9 hours ago [-]
I was under the impression from Rust developers that it was one of the languages LLMs struggled with a bit more than others? My view could be (probably is) very outdated.
jetbalsa 18 hours ago [-]
That is why TypeScript is the main language used by most people vibe coding. The LLMs do like to work around its type engine sometimes, but strong typing and linting can help a ton.
onlyrealcuzzo 18 hours ago [-]
> Impressive. As a practical matter, one wonders what th point would be in creating a new programming languages if the programmer no longer has to write or read code.
I'm working on a language as well (hoping to debut by end of month), but the premise of the language is that it's designed like so:
1) It maximizes local reasoning and minimizes global complexity
2) It makes the vast majority of bugs / illegal states impossible to represent
3) It makes writing correct, concurrent code as maximally expressive as possible (where LLMs excel)
4) It maximizes optionality for performance increases (it's always just flipping option switches - mostly at the class and function input level, occasionally at the instruction level)
The idea is that it should be as easy as possible for an LLM to write it (especially convert other languages to), and as easy as possible for you to understand it, while being almost as fast as absolutely perfect C code, and by virtue of the design of the language - at the human review phase you have minimal concerns of hidden gotcha bugs.
idiotsecant 17 hours ago [-]
How does a programming language prevent the vast majority of bugs? I feel like we would all be using that language!
Chaosvex 5 hours ago [-]
How? That's easy. You just need a huge dollop of hubris.
onlyrealcuzzo 16 hours ago [-]
See Rust with Use-after-Free, fearless concurrency, etc.
My language is a step ahead of Rust, but not as strict as Ada, while being easier to read than Swift (especially where concurrency is involved).
gf000 17 hours ago [-]
I agree with your questioning of it being capable of preventing bugs, but your second point is quite likely false -- we have developed a bunch of very useful abstractions in "research" languages 50 years ago, only to re-discover them today (no null, algebraic data types, pattern matching, etc).
johnfn 18 hours ago [-]
> If you’re not writing or reading it, the language, by definition doesn’t matter.
By what definition? It still matters whether I write my app in Rust vs., say, Python, because the Rust version still has better performance characteristics.
johnbender 18 hours ago [-]
In principle (and we hope in practice) the person is still responsible for the consequences of running the code and so it remains important they can read and understand what has been generated.
koolala 18 hours ago [-]
Saves tokens. The main reason though is to manage performance for what techniques get used for specific use cases. In their case it seems to be about expressiveness in Bash.
andyfilms1 18 hours ago [-]
I've been wondering if a diffusion model could just generate software as binary that could be fed directly into memory.
entropie 18 hours ago [-]
Yeah, what could go wrong.
eatsyourtacos 16 hours ago [-]
I have been building a game via a separate game-logic library and Unity (which includes that independent library)... let's just say that over the last couple of weeks I have 100% lost the need to do the coding myself. I keep iterating and have it improve, and there are hundreds of unit tests. I have a Unity MCP and it does 95% of the Unity work for me. Of course the real game will need custom designing and all that, but in terms of getting a complete prototype set up... I am literally no longer the coder. I just did in a week what it would have taken me months and months to do. Granted, Unity is still somewhat new to me, but still... even if you are an expert, it can immediately look at all your game objects and detect issues, etc.
So yeah for some things we are already at the point of "I am not longer the coder, I am the architect".. and it's scary.
nineteen999 15 hours ago [-]
100% the same experience with Claude and Unreal Engine 5 over here. And as the game moves from "less scaffolding" towards "more code", Claude is actually getting better at one-shotting things than it ever was - probably due to there being a lot more examples in the codebase of how to handle things under different scenarios (world composition, multiplayer, etc.).
lionkor 17 minutes ago [-]
If I wanted
a vibe coded
programming language,
I would ask my LLM. Not go on HN.
gopalv 16 hours ago [-]
> More addictive than that is the unpredictability and randomness inherent to these tools. If you throw a problem at Claude, you can never tell what it will come up with. It could one-shot a difficult problem you’ve been stuck on for weeks, or it could make a huge mess. Just like a slot machine, you can never tell what might happen. That creates a strong urge to try using it for everything all the time.
That is the part of the post that stuck with me, because I've also picked up impossible challenges and tried to get Claude to dig me out of a mess without giving up from very vague instructions[1].
The effect feels like the Loss-Disguised-As-Win feeling of the video-games I used to work on at Zynga.
Sure it made a mistake, but it is right there, you could go again.
Pull the lever, doesn't matter if the kids have Karate at 8 AM.
> The effect feels like the Loss-Disguised-As-Win feeling of the video-games I used to work on at Zynga.
If you can write a blog post on this, I'd like to read it.
bobjordan 17 hours ago [-]
I've been working on a large codebase that was already significant before LLM-assisted programming, leveraging code I’d written over a decade ago. Since integrating Claude and Codex, the system has evolved and grown massively. Realistically, there’s a lot in there now that I simply couldn't have built in a standard human lifetime without them.
That said, the core value of the software wouldn't exist without a human at the helm. It requires someone to expend the energy to guide it, explore the problem space, and weave hundreds of micro-plans into a coherent, usable system. It's a symbiotic relationship, but the ownership is clear. It’s like building a house: I could build one with a butter knife given enough time, but I'd rather use power tools. The tools don't own the house.
At this point, LLMs aren't going to autonomously architect a 400+ table schema, network 100+ services together, and build the UI/UX/CLI to interface with it all. Maybe we'll get there one day, but right now, building software at this scale still requires us to drive. I believe the author owns the language.
wcarss 16 hours ago [-]
This is the take, very well said. I've been trying to use analogies with cars and cabinet making, but building a house is just right for the scale and complexity of the efforts enabled, and the ownership idea threads into it well.
Going into the vault!
anonnon 14 hours ago [-]
> Realistically, there’s a lot in there now that I simply couldn't have built in a standard human lifetime without them.
I have yet to see a study showing something like a 2x or better boost in programmer productivity through LLMs. Usually it's something like 10-30%, depending on what metrics you use (which I don't doubt). Maybe it's 50% with frontier models, but seeing these comments on HN where people act like they're 10x more productive with these tools is strange.
thunky 12 hours ago [-]
Odd choice of a comment to post this reply to.
I guess you're just not going to believe what anyone says.
anonnon 12 hours ago [-]
> Odd choice of a comment to post this reply to.
How? They claimed LLMs somehow enabled them to write more code in the span of 3.5 years (assuming they started with ChatGPT's introduction) than they would be able to write in the span of decades. No studies have shown this. But at least one study did show that LLM devs overestimate how productive these systems make them.
thunky 10 hours ago [-]
> How?
You're calling this person a liar because they don't have a study to back up their personal anecdote. Which is a strange position to take imo.
anonnon 8 hours ago [-]
It's strange that I don't accept unverified anecdotes on their face, especially when they contradict the best evidence available? Also
> calling this person a liar
"Liar" implies a deliberate attempt to deceive, but I specifically mentioned the possibility that these tools just make you feel much more productive than you actually are, as at least one study found. But I'm sure a lot of these anecdotes are, in fact, lies from liars (bots/shills). The fact that Anthropic has to resort to stuff like this: https://news.ycombinator.com/item?id=47282777
should make everyone suspicious of the extravagant claims being made about Claude.
thunky 23 minutes ago [-]
You're the only one in this thread that mentioned 2x and 10x productivity boosts and studies.
Obviously everyone has their own experiences with LLMs. But I think it's an interesting position to take to tell random people that their reported experience is wrong. Or how you could be so certain that LLMs can't possibly be that useful.
heavyset_go 15 hours ago [-]
> I believe the author owns the language.
Not according to the US Copyright Office. It is 100% LLM output, so it is not copyrightable; thus it's free for anyone to do anything with, and no claimed ownership or license can stop them.
wild_egg 15 hours ago [-]
Do you have a citation for that?
heavyset_go 15 hours ago [-]
Yes[1]. Copyright applies to human creations, not machine generated output.
It's possible to use AI output in human-created content and have it be copyrightable, and substantive, transformative human-creative alteration of AI output is also copyrightable.
> This analysis will be "necessarily case-by-case" because it will "depend on the circumstances, particularly how the AI tool operates and how it was used to create the final work."
This seems the opposite of the cut and dry "cannot be copyrighted" stance I was replying to.
kccqzy 11 hours ago [-]
Yes it does depend on the circumstances. You are free to waste your own time to try this at the copyright office, but in my opinion, this project's 100% LLM output where the human element is just writing prompts and steering the LLM is the same circumstance as my linked case where the human prompted Midjourney 624 times before producing the image the human deemed acceptable. The copyright office has this to say:
> As the Office described in its March guidance, “when an AI technology receives solely a prompt from a human and produces complex written, visual, or musical works in response, the ‘traditional elements of authorship’ are determined and executed by the technology—not the human user.”
aleksiy123 17 hours ago [-]
On the topic of LLMs not doing well with UI and visuals:
I've been trying a new approach I call CLI-first. I realized CLI tools are designed to be used both by humans (command line) and machines (scripting), and are perfect for LLMs, since they are a text-only interface.
Essentially, instead of trying to get the LLM to generate a fully functioning UI app, you focus on building a local CLI tool first.
A CLI tool is cheaper and simpler, but still has a real human UX that pure APIs don't.
You can get the LLM to actually walk through the flows and journeys like a real user, end to end, and it will actually see the awkwardness or gaps in the design.
Your command structure will very roughly map to your resources or pages.
Once you are satisfied with the capability of the CLI tool (which may actually be enough on its own, or with just a local UI), you can get it to build the remote storage, then the APIs, and finally the frontend.
All the while, you can still tell it to use the CLI to test the flows and journeys against real tasks that you have, and iterate.
I did this recently for pulling some of my personal financial data and reporting on it. And now I'm doing this for another TTS automation I've wanted for a while.
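To make the CLI-first idea concrete, here is a minimal sketch in Python with `argparse` (the "expenses" domain and the `fin` command name are hypothetical, purely for illustration), where each subcommand maps roughly to a resource a later UI would expose:

```python
import argparse

def build_parser():
    # Commands roughly map to the resources a later UI would expose.
    p = argparse.ArgumentParser(prog="fin")
    sub = p.add_subparsers(dest="command", required=True)

    add = sub.add_parser("add", help="record an expense")
    add.add_argument("amount", type=float)
    add.add_argument("category")

    report = sub.add_parser("report", help="summarize by category")
    report.add_argument("--month", default=None)
    return p

def run(argv, store):
    # Returning strings (instead of printing) keeps the tool equally
    # usable by humans, scripts, and an LLM walking through the flows.
    args = build_parser().parse_args(argv)
    if args.command == "add":
        store.append((args.category, args.amount))
        return f"recorded {args.amount:.2f} in {args.category}"
    if args.command == "report":
        totals = {}
        for cat, amt in store:
            totals[cat] = totals.get(cat, 0.0) + amt
        return "\n".join(f"{c}: {t:.2f}" for c, t in sorted(totals.items()))

if __name__ == "__main__":
    store = []
    print(run(["add", "12.50", "food"], store))
```

Because every flow is just a list of argv strings, an agent can exercise the same journeys a user would, which is exactly the testability the comment describes.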
Bnjoroge 17 hours ago [-]
Not to discount your experience, but I dont understand what's interesting about this. You could always build a programming language yourself, given enough time. Programming languages' constructs are well represented in the training dataset. I want someone to build something uniquely novel that's not actually in the dataset and i'll be impressed by CC.
asciimov 16 hours ago [-]
This takes all the satisfaction out of spending a few well thought out weekends to build your own language. So many fun options: compiled or interpreted; virtual machine, or not; single pass, double pass, or (Leeloo Dallas) Multipass? No cool BNF grammars to show off either…
It’s missing all the heart, the soul, of deciding and trading off options to get something to work just for you. It’s like you bought a rat bike from your local junkyard and are trying to pass it off as your own handmade cafe racer.
fcatalan 14 hours ago [-]
This enables different satisfactions. You can still make all your choices, but have a working REPL or small compiler to try them in within minutes.
You also decide how much control you keep. Want to provide a hand-made grammar? Go ahead. Want the agent to come up with one just from chatting and pointing it at other languages? OK too. Want to implement just the first arithmetic operator yourself, then skip the tedium of typing all the others so you can move to the next step? Fine...
So you can have a huge toy language in mere days and experiment with stuff you'd have to build for months by hand to be able to play with.
NuclearPM 15 hours ago [-]
Deciding on the syntax and semantics myself and using AI to help implement my toy language has been very rewarding.
Mine is an Io- and Rebol-inspired language that uses SQLite and LuaJIT as a runtime.
1.to 10 .map[n | n * n].each[n | n.say!]
pluc 18 hours ago [-]
Claude Code built a programming language using you
ramon156 18 hours ago [-]
AI-written code with a human-written blog post; that's a big step up.
That said, it's a lot of words to say not a lot of things. Still a cool post, though!
ivanjermakov 18 hours ago [-]
> with a human-written blog post
I believe we're at a point where it's not possible to accurately decide whether text is completely written by human, by computer, or something in between.
wavemode 18 hours ago [-]
We're definitely not at that point.
If this blog post is unedited LLM output, the blog owner needs to sell whatever model, setup and/or prompt he used for a million dollars, since it's clearly far beyond the state-of-the-art in terms of natural-sounding tone.
craigmart 17 hours ago [-]
You can make an LLM sound very natural if you simply ask for it and provide enough text in the tone you’d like it to reproduce. Otherwise, it’s obvious that an LLM with no additional context will try to stick to the tone the company aligned it to produce
exitb 17 hours ago [-]
”I named it Cutlet after my cat. It’s completely legal to do that.”
I’ve never seen an LLM able to produce this kind of absurdist joke. Or any jokes, really.
BoredomIsFun 1 hours ago [-]
Here's an AI-generated dad joke:
Why did the cabbage refuse to testify in court? It didn't want to be grilled and then shredded.
Seems to be new and very corny.
craigmart 15 hours ago [-]
Comedy is a completely different thing from natural tone. I agree that they’re incapable of coming up with decent jokes.
sakesun 5 hours ago [-]
Agree. I keep asking LLMs to tell me jokes from time to time, but not once have I found one funny. For me, when I find myself bursting out laughing at an LLM's joke, I'll know we've reached AGI.
BoredomIsFun 50 minutes ago [-]
You're doing it the wrong way: you should not ask for jokes; you need to structure the prompt so jokes are a byproduct.
wavemode 13 hours ago [-]
I never claimed that you can't get natural tone out of an LLM. What I said was that you can't get this blog post out of one.
By all means, go read the post and then try to do so.
Bnjoroge 17 hours ago [-]
Agree. I've been yearning for more insightful posts and there just aren't a lot of them out there these days.
tines 18 hours ago [-]
Next you can let Claude play your video games for you as well. Gads, we are a voyeuristic society, aren't we.
ajay-b 18 hours ago [-]
Why not let Claude do our dating? I'm surprised someone hasn't thought of this: AI dating, let the AI find and qualify a date for you, and match with the person who meets you, for you!
g3f32r 18 hours ago [-]
I suspect this is going to be an iteration of the Simpsons meme soon, but...
I am kind of doing that now. I put Kimi K2.5 into a Ralph Loop to make a Screeps.com AI. So far it's been awful at it. If you want to track its progress, I have its dashboard at https://balsa.info
knicholes 16 hours ago [-]
Honestly some of the most fun I had playing Ultima Online was writing scripts to play it for me.
monster_truck 13 hours ago [-]
The stun -> disarm -> pickpocket -> bludgeon defenseless player scripts are still the most fun I've ever had in an MMO.
jaggederest 18 hours ago [-]
I think we're going to see a lot more of this. I've done a similar thing, hosting a toy language in Haskell, and it was remarkably easy to get something useful and usable in basically a weekend. If you keep the surface area small enough, you can now make a fully fledged, compiled language for basically any purpose you'd like, and coevolve the language, the code, and the compiler.
marginalia_nu 18 hours ago [-]
Yeah, it's a rewarding project. Getting a language that kinda works is surprisingly accessible. Though we must be mindful that this is still the "draw some circles" panel. Producing the rest of the famous owl is, as always, the hard bit.
Copyrightest 13 hours ago [-]
[dead]
soperj 18 hours ago [-]
We did this in 4th year comp-sci.
18 hours ago [-]
laweijfmvo 18 hours ago [-]
Using LLMs to invent new programming languages is a mystery to me. Who or what is going to use this? Presumably not the author.
matthews3 18 hours ago [-]
Have the AI generate some feedback, then just move on to the next project, and repeat.
Similar experience building a product solo with AI. The spec-first workflow you describe is very real. I converged on something similar after getting burned way too many times :(
One thing I'd add: even with good specs, the agent still cuts corners in ways that are hard to catch. It'll implement a feature but quietly add a fallback that returns mock data when the real path fails. Your app looks like it works. It doesn't. You find out in production.
Or it'll say "done" and what it did was add a placeholder component with a TODO. So now I have trust issues and I review everything, which kind of defeats the "walk away from the computer" part.
The "just one more prompt" loop is so true lol.
kreek 12 hours ago [-]
This is the second "I built a programming language" post in a day, and if I post the one I'm building, we can have a three-day streak :D They thought AI meant personal software, but it also means personal programming languages!
In all seriousness, this is great, and why not? As the post said, what once took months now takes weeks. You can experiment and see what works. For me, I started off building a web/API framework with certain correctness built in, and kept hitting the same wall: the guarantees I wanted (structured error handling, API contracts, making invalid states unrepresentable) really belonged at the language level, not bolted onto a framework. A few Claude Code sessions later, I had a spec, then a tree-sitter implementation, then a VM/JIT... something that, given my sandwich-generation-ness, I never would have done a few months ago.
bfivyvysj 12 hours ago [-]
I should post number 4: last week I built a new Lisp framework for LLMs as first-class programmers. It compiles to Go, Python, and JS.
dybber 16 hours ago [-]
I have been trying this as well, and you can get quite far quickly.
However, I fear that agents will always work better on programming languages they have been heavily trained on, so for agent-based development, inventing a new domain-specific language (e.g. for use internally in a company) might not be as efficient as using a generic programming language that models are already trained on, and just living with the extra boilerplate.
p0w3n3d 17 hours ago [-]
I'd say these times will be filled with a lot of tailored-to-you "self"-made software, but the question is: are we increasing the amount of information in the world? I hear that Claude and ChatGPT are getting good at mathematical proofs, which really adds something to our knowledge, but all other things are neutral to entropy, if not decreasing it. Strange time to live in, strange valuations and devaluations...
NuclearPM 15 hours ago [-]
Neutral to entropy? What do you mean?
ractive 14 hours ago [-]
> [...] “just one more prompt” [...]. That creates a strong urge to try using it for everything all the time. And just like with slot machines, the [house](https://www.anthropic.com) always wins.
I really liked that part - the house always wins.
amelius 18 hours ago [-]
The AI age is calling for a language that is append-only, so we can write in a literate programming style and mix prompts with AI output, in a linear way.
geon 18 hours ago [-]
That’s git commits.
amelius 18 hours ago [-]
That's arguably not very ergonomic, which is probably the biggest requirement for a programming language.
beepbooptheory 16 hours ago [-]
Why care about ergonomics if you're not going to write the code?
It has not had any issues at all writing objc3 code
Copyrightest 13 hours ago [-]
[dead]
randallsquared 16 hours ago [-]
> The @ meta operator also works with comparisons.
I haven't read any farther than this yet, but this made me stutter in my reading. Isn't a comparison just a function that takes two arguments and returns a third value? How is that different from "+"?
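For what it's worth, most languages do treat comparisons exactly like `+`: binary functions returning a value, with only the result type differing. A quick Python illustration (the `lift` helper is hypothetical, sketching what a meta operator over binary functions might do):

```python
import operator

# Both are ordinary two-argument functions returning a value;
# the only difference is the type of the result.
assert operator.add(2, 3) == 5    # returns an int
assert operator.lt(2, 3) is True  # returns a bool

# So anything that lifts binary operators (like a hypothetical
# elementwise meta operator) can treat + and < uniformly:
def lift(op, xs, ys):
    return [op(x, y) for x, y in zip(xs, ys)]

print(lift(operator.add, [1, 2], [10, 20]))  # [11, 22]
print(lift(operator.lt, [1, 2], [10, 0]))    # [True, False]
```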
12 hours ago [-]
jackby03 17 hours ago [-]
Curious how you handled context management as the project grew — did you end up with a single CLAUDE.md or something more structured? I've been thinking about this problem and working on a standard for it.
This is something I've been thinking a bit about in the last few months.
TL;DR: I don't think an LLM can create a language from scratch better than what we have. LLMs effectively operate on a Huffman-coded format (a generalization). You could probably communicate with them directly in the token representation and be better off. The LLM's actual understanding of the language is probably very inefficient.
For human languages, I think there is an opportunity here: you can build up intelligence on common reusable patterns and find places to optimize their usage, or break them down in a more CPU-/human-readable way.
shadeslayer 16 hours ago [-]
It’s been a while friend
Congratulations on getting to the front page ;)
jcranmer 18 hours ago [-]
I recently tried using Claude to generate a lexer and parser for a language I was designing. As part of its first attempt, this was the code to parse a float literal:
Admittedly, I do have a very idiosyncratic definition of floating-point literal for my language (I have a variety of syntaxes for NaNs with payloads), but... that is not a usable definition of float literal.
At the end of the day, I threw out all of the code the AI generated and wrote it myself, because the AI struggled to produce code that was functional to spec, much less code that would allow me to easily extend it to other kinds of future operators that I knew I would need in the future.
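For contrast with what the commenter was handed, a conventional float-literal recognizer (without any idiosyncratic NaN-payload syntax; this is a generic sketch, not the commenter's spec) fits in a few lines of Python:

```python
import re

# A conventional float-literal pattern: optional sign, then digits with a
# required '.' or exponent part, so plain integers are NOT matched.
FLOAT_RE = re.compile(r"""
    [+-]?
    (?:
        \d+\.\d*(?:[eE][+-]?\d+)?   # 1.   1.5   1.5e3
      | \.\d+(?:[eE][+-]?\d+)?      # .5   .5e-2
      | \d+[eE][+-]?\d+             # 1e9
    )
    $""", re.VERBOSE)

def is_float_literal(s: str) -> bool:
    # match() anchors at the start; the trailing $ anchors at the end,
    # so partial matches like "1.5e" are rejected.
    return FLOAT_RE.match(s) is not None

print(is_float_literal("1.5e3"))  # True
print(is_float_literal("123"))   # False (an integer, not a float literal)
```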
dboreham 17 hours ago [-]
I had a somewhat similar experience with Claude coding an Occam parser, but I just let it do its thing, and once I had presented it with a suitable suite of test source code, it course-corrected, refactored, and ended up with a reasonable solution. The journey was a bit different from an experienced human developer's, but the results were much the same, and perhaps 100x cheaper.
jcranmer 16 hours ago [-]
Some of the issues are undoubtedly that I have a decidedly non-standard architecture for my system that the AI refuses to acknowledge--it hallucinated things like integers, which aren't part of my system, simply because what I have looks almost like a standard example expression grammar, so clearly I must have all of the standard example expression grammar things. (This is a pretty common failure mode I've noticed in AI-based systems: when the thing you're looking for is very similar to a very notable, popular thing, they tend to assume you mean the latter as opposed to the former.)
righthand 18 hours ago [-]
> I’ve also been able to radically reduce my dependency on third-party libraries in my JavaScript and Python projects. I often use LLMs to generate small utility functions that previously required pulling in dependencies from NPM or PyPI.
This is such an interesting statement to me in the context of leftpad.
rpowers 18 hours ago [-]
I'm imagining the amount of energy required to power the datacenter so that we can produce isEven() utility methods.
BoredomIsFun 46 minutes ago [-]
You could always run locally on your 5060 Ti.
righthand 17 hours ago [-]
Also, neither over-the-wire dependency issues nor code injection issues (the two major criticisms) are solved by using an LLM to produce the code. Talk about shifting complexity. It would be better if every LSP had a general utility-library generator built in.
nefarious_ends 18 hours ago [-]
we need a caching layer
craigmcnamara 18 hours ago [-]
Now anyone can be a Larry Wall, and I'm not sure that's a good thing.
nz 18 hours ago [-]
This is not exactly novel. In the 2000s, Audrey Tang made a fully functioning Perl 6 runtime (Pugs) in a very short amount of time (a month, IIRC) using Haskell. The various Lisps/Schemes have always given you the ability to implement specialized languages even more quickly and ergonomically than Haskell (IMHO).
This latest fever for LLMs simply confirms that people would rather do _anything_ other than program in a (not necessarily purely) functional language that has meta-programming facilities. I personally blame functional fixedness (psychological concept). In my experience, when someone learns to program in a particular paradigm or language, they are rarely able or willing to migrate to a different one (I know many people who refused to code in anything that did not look and feel like Java, until forced to by their growling bellies). The AI/LLM companies are basically (and perhaps unintentionally) treating that mental inertia as a business opportunity (which, in one way or another, it was for many decades and still is -- and will probably continue to be well into a post-AGI future).
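The Lisp point is easy to demonstrate even outside of Lisp: a toy s-expression language is a screenful of code. A sketch in Python rather than Lisp (purely illustrative; a real implementation would add variables, error handling, and so on):

```python
import operator

# Map operator symbols to ordinary binary functions.
OPS = {"+": operator.add, "-": operator.sub,
       "*": operator.mul, "/": operator.truediv}

def tokenize(src):
    # Pad parentheses with spaces so split() separates every token.
    return src.replace("(", " ( ").replace(")", " ) ").split()

def parse(tokens):
    # Recursive-descent: "(" starts a nested list, anything else is an atom.
    tok = tokens.pop(0)
    if tok == "(":
        expr = []
        while tokens[0] != ")":
            expr.append(parse(tokens))
        tokens.pop(0)  # drop the closing ")"
        return expr
    try:
        return float(tok)
    except ValueError:
        return tok  # an operator symbol

def evaluate(expr):
    if isinstance(expr, float):
        return expr
    op, *args = expr
    return OPS[op](*map(evaluate, args))

print(evaluate(parse(tokenize("(* (+ 1 2) 4)"))))  # 12.0
```

Whether one gets there via macros, Haskell, or an LLM, the "working toy language in an afternoon" part has never been the hard bit; the hard bit is everything after.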
zahirbmirza 17 hours ago [-]
"Just one more prompt..." I can relate. who else has been affected by this?
ractive 14 hours ago [-]
Yes, it completely sucks you in and you do "just one more prompt" until late in the night. And somehow you wake up with a headache the next morning...
dwedge 17 hours ago [-]
Admittedly I only skimmed this, but I found it interesting that they came to the conclusion that Claude is really bad at (thing they know how to do, and therefore judge) and really good at (thing they don't know how to do or judge).
I mean, they may be right but there is also a big opportunity for this being Gell-Mann amnesia : "The phenomenon of a person trusting newspapers for topics which that person is not knowledgeable about, despite recognizing the newspaper as being extremely inaccurate on certain topics which that person is knowledgeable about."
mrsmrtss 17 hours ago [-]
I had the exact same thoughts reading it.
grumpyprole 17 hours ago [-]
Does this really test Claude in a useful way? Is building a highly derivative programming language a useful use case? Claude has probably indexed all existing implementations of imperative dynamic languages and is basically spewing slop based on that vibe. Rather than super flexible, super unsafe languages, we need languages with guardrails, restrictions and expressive types, now more than ever. Maybe LLMs could help with that? I'm not sure, it would certainly need guidance from a human expert at every step.
esafak 13 hours ago [-]
This was a missed opportunity to showcase how to use formal methods for proof of correctness. The author does not even seem to be particularly interested in programming language design; there is no discussion of design goals, or inspiration. Nothing to see here.
shevy-java 17 hours ago [-]
That was step #1.
Step #2 is: get real people to use it!
mriet 18 hours ago [-]
Wait. You built a new language that there's thus no training data for.
Who the hell is going to use it then? You certainly won't, because you're dependent on AI.
logicprog 18 hours ago [-]
"Please don't post shallow dismissals, especially of other people's work. A good critical comment teaches us something."
It's a valid question and one that everyone should be asking, unless of course it's for fun, which is what I believe this is.
croes 17 hours ago [-]
It isn’t shallow.
Who’s going to use it?
fcatalan 13 hours ago [-]
You tell the agent to write a whimsical tutorial book about the language, it takes about an hour :)
dankwizard 13 hours ago [-]
We're in the process of migrating our entire code base over to this new language (One of the big 4 banks) - Keen to add early adopters to our resumes : - )
koolala 18 hours ago [-]
With clear examples in their context they don't need training data.
18 hours ago [-]
atoav 17 hours ago [-]
I rolled a fair dice using ChatGPT.
kerkeslager 18 hours ago [-]
> While working on Cutlet, though, I allowed Claude to generate every single line of code. I didn’t even read any of the code. Instead, I built guardrails to make sure it worked correctly (more on that later).
The "more on that later" was unit tests (also generated by Claude Code) and sample inputs and outputs (which is basically just unit tests by a different name).
This is... horrifically bad. It's stupidly easy to make unit tests pass with broken code, and even more stupidly easy when the test is also broken.
These "guardrails" are made of silly putty.
EDIT: Would downvoters care to share an explanation? Preferably one they thought of?
octoclaw 18 hours ago [-]
[dead]
aplomb1026 18 hours ago [-]
[dead]
dehkopolis 18 hours ago [-]
[dead]
sabinbir 12 hours ago [-]
[dead]
iberator 17 hours ago [-]
Nope. You didn't write it. You plagiarized it. AI is bad
cptroot 14 hours ago [-]
If you read TFA, you'll find that the author agrees with you - at least on your first point.
While I agree "AI is bad", well-written posts like this one can provide real insight into the process of using them, and reveal more about _why_ AI is bad.
The vast majority of tokens are not used for documentation or reference material, but rather for reasoning/thinking. Unless you somehow design a programming language that is drastically different from anything that currently exists, you can safely bet that LLMs will pick it up with relative ease.
You can do it today if you are willing to pay (API or on top of your subscription) [0]
> The 1M context window is currently in beta. Features, pricing, and availability may change.
> Extended context is available for:
> API and pay-as-you-go users: full access to 1M context
> Pro, Max, Teams, and Enterprise subscribers: available with extra usage enabled
> Selecting a 1M model does not immediately change billing. Your session uses standard rates until it exceeds 200K tokens of context. Beyond 200K tokens, requests are charged at long-context pricing with dedicated rate limits. For subscribers, tokens beyond 200K are billed as extra usage rather than through the subscription.
[0] https://code.claude.com/docs/en/model-config#extended-contex...
That's assuming that your new, very unknown language gets slurped up in the next training session which seems unlikely. Couldn't you use RAG or have an LLM read the docs for your language?
There are languages that are already pretty sparse with keywords. e.g in Go you can write 'func main() string', no need to define that it's public, or static etc. So combining a less verbose language with 'codegolfing' the variables might be enough.
In Go, every third line is a noisy if err check.
Claude seems more consistently _concise_ to me, both in web and cli versions. But who knows, after 12 months of stuff it could be me who is hallucinating...
Code readability is another, correlated one, but this is more subjective. To me, Go scores pretty low here - code flow would be readable were it not for the huge amount of noise you get from error "handling" (it is mostly just syntactic ceremony, often failing to properly handle the error case, and people are desensitized to these blocks, so code reviews are more likely to miss them).
For function signatures, they made it terser - in my subjective opinion - at the expense of readability. There were two very mainstream schools of thought on type-signature syntax, `type ident` and `ident : type`. Go opted for a third one that is unfamiliar to both camps, while not even having the benefits of the second syntax (e.g. easy type syntax; subjectively, that `:` helps the eye "pattern match" these expressions).
Meanwhile Go will have some boilerplate at every single level
Errors as values can be made ergonomic: there is the FP-heavy monadic solution with `do`, or syntactic sugar like Rust's `?`. Go has none of these.
You don’t have to hate Go to agree that Rust’s `?` operator is much nicer when all you want to do is propagate the error.
Programming languages function in large parts as inductive biases for humans. They expose certain domain symmetries and guide the programmer towards certain patterns. They do the same for LLMs, but with current AI tech, unless you're standing up your own RL pipeline, you're not going to be able to get it to grok your new language as well as an existing one. Your chances are better asking it to understand a library.
How will it "learn" anything if the only available training data is on a single website?
LLMs struggle with following instructions when their training set is massive. The idea that they will be able to produce working software from just a language spec and a few examples is delusional. It's a fundamental misunderstanding of how these tools work. They don't understand anything. They generate patterns based on probabilities and fine tuning. Without massive amounts of data to skew the output towards a potentially correct result they're not much more useful than a lookup table.
I'm using Claude Code to work on something involving a declarative UI DSL that wraps a very imperative API. Its first pass at adding a new component required imperative management of that component's state. Without that implementation in context, I told Claude the imperative pattern "sucks" and asked for an improvement just to see how far that would get me.
A human developer familiar with the codebase would easily understand the problem and add some basic state management to the DSL's support for that component. I won't pretend Claude understood, but it matched the pattern and generated the result I wanted.
This does suggest to me that a language spec and a handful of samples is enough to get it to produce useful results.
I have done exactly the above with great success. I work with a weird proprietary esolang sometimes that I like, and the only documentation - or code - that exists for it is on my computer. I load that documentation in, and it works just fine and writes pretty decent code in my esolang.
"But that can't possibly work [based on my misunderstanding of how LLMs work]!" you say.
Well, it does, so clearly you misunderstand how they work.
Probably if you’re trying to be esoteric and arcane then yeah, you might have trouble, but that’s not normally how languages evolve.
With more explicit types and dataflow information, the model doesn't need to simulate execution (something LLMs are particularly bad at) as much as recognize and extend a transformation graph (something LLMs are particularly good at). So it's probably just that your particularly weird language is particularly well-adapted to LLM technology.
The impact that lack of training data has on the quality of the results is easily observable. Try getting them to maintain a Python codebase vs. e.g. an Elixir one. Not just generate short snippets of code, but actually assist in maintaining it. You'll constantly run into basic issues like invalid syntax, missing references, use of nonexistent APIs, etc., not to mention more functional problems like dead, useless, or unnecessarily complicated code. I run into these things with mainstream languages (Go, Python, Clojure), so I don't see how an esolang could possibly fare any better.
But then again, the definitions of "just fine" and "decent" are subjective, and these tools are inherently unreliable, which is where I suspect the large disconnect in our experiences comes from.
Roughly: machine code --> assembly --> C --> high-level languages --> frameworks --> visual tools --> LLM-assisted coding. Most of those transitions were controversial at the time, but in retrospect they mostly expanded the toolbox rather than replacing the lower layers.
One workflow I’ve found useful with LLMs is to treat them more like a code generator after the design phase. I first define the constraints, objects, actors, and flows of the system, then use structured prompts to generate or refine pieces of the implementation.
I'm being slightly facetious of course, I still use sequence diagrams and find them useful. The rest of its legacy though, not so much.
On a different but related note, it's almost the same as pairing django or rails with an LLM. The framework allows you to trust that things like authentication and a passable code organization are being correctly handled.
I'm working on a language as well (hoping to debut by end of month), but the premise of the language is that it's designed like so:
1) It maximizes local reasoning and minimizes global complexity
2) It makes the vast majority of bugs / illegal states impossible to represent
3) It makes writing correct, concurrent code as maximally expressive as possible (where LLMs excel)
4) It maximizes optionality for performance increases (it's always just flipping option switches - mostly at the class and function input level, occasionally at the instruction level)
The idea is that it should be as easy as possible for an LLM to write (and especially to convert other languages to), and as easy as possible for you to understand, while being almost as fast as absolutely perfect C code; and, by virtue of the design of the language, at the human review phase you have minimal concern about hidden gotcha bugs.
My language is a step ahead of Rust, but not as strict as Ada, while being easier to read than Swift (especially where concurrency is involved).
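Point 2, "make illegal states unrepresentable", can be illustrated even in a mainstream language (this is only a Python sketch of the idea with a tagged union, not the commenter's actual language): instead of one record with nullable fields, each state carries only the fields valid in that state.

```python
from dataclasses import dataclass
from typing import Union

# With a single record holding optional `data` and `error` fields, the
# illegal combination "loaded AND failed" is representable. A tagged
# union removes that possibility at the type level.
@dataclass
class Loading:
    pass

@dataclass
class Loaded:
    data: str

@dataclass
class Failed:
    error: str

FetchState = Union[Loading, Loaded, Failed]

def describe(state: FetchState) -> str:
    # Each branch can only see the fields that exist in that state.
    if isinstance(state, Loading):
        return "loading..."
    if isinstance(state, Loaded):
        return f"got {state.data}"
    return f"failed: {state.error}"
```

Languages that enforce exhaustive matching over such unions give the reviewer (human or LLM) exactly the "no hidden gotcha states" property the comment is aiming for.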
By what definition? It still matters if I write my app in Rust vs say Python because the Rust version still have better performance characteristics.
So yeah, for some things we are already at the point of "I am no longer the coder, I am the architect"... and it's scary.
a vibe coded programming language, I would ask my LLM. Not go on HN.
That is the part of the post that stuck with me, because I've also picked up impossible challenges and tried to get Claude to dig me out of a mess without giving up from very vague instructions[1].
The effect feels like the Loss-Disguised-As-Win feeling of the video-games I used to work on at Zynga.
Sure it made a mistake, but it is right there, you could go again.
Pull the lever, doesn't matter if the kids have Karate at 8 AM.
[1] - https://github.com/t3rmin4t0r/magic-partitioning
If you can write a blogpost for this i'd like to read it.
That said, the core value of the software wouldn't exist without a human at the helm. It requires someone to expend the energy to guide it, explore the problem space, and weave hundreds of micro-plans into a coherent, usable system. It's a symbiotic relationship, but the ownership is clear. It’s like building a house: I could build one with a butter knife given enough time, but I'd rather use power tools. The tools don't own the house.
At this point, LLMs aren't going to autonomously architect a 400+ table schema, network 100+ services together, and build the UI/UX/CLI to interface with it all. Maybe we'll get there one day, but right now, building software at this scale still requires us to drive. I believe the author owns the language.
Going into the vault!
I have yet to see a study showing something like a 2x or better boost in programmer productivity through LLMs. Usually it's something like 10-30%, depending on what metrics you use (which I don't doubt). Maybe it's 50% with frontier models, but seeing these comments on HN where people act like they're 10x more productive with these tools is strange.
I guess you're just not going to believe what anyone says.
How? They claimed LLMs somehow enabled them to write more code in the span of 3.5 years (assuming they started with ChatGPT's introduction) than they would have been able to write in decades. No studies have shown this. But at least one study did show that developers using LLMs overestimate how productive these systems make them.
You're calling this person a liar because they don't have a study to back up their personal anecdote. Which is a strange position to take imo.
> calling this person a liar
"Liar" implies a deliberate attempt to deceive, but I specifically mentioned the possibility that these tools just make you feel much more productive than you actually are, as at least one study found. But I'm sure a lot of these anecdotes are, in fact, lies from liars (bots/shills). The fact that Anthropic has to resort to stuff like this: https://news.ycombinator.com/item?id=47282777
should make everyone suspicious of the extravagant claims being made about Claude.
Obviously everyone has their own experiences with LLMs. But I think it's an interesting position to take to tell random people that their reported experience is wrong. Or how you could be so certain that LLMs can't possibly be that useful.
Not according to the US Copyright Office. It is 100% LLM output, so it is not copyrighted, thus it's free for anyone to do anything with it and no claimed ownership or license can stop them.
It's possible to use AI output in human created content, and it can be copyrightable, and substantiative, transformative human-creative alteration of AI output is also copyrightable.
100% machine generated code is not copyrightable.
[1] https://newsroom.loc.gov/news/copyright-office-releases-part...
This seems the opposite of the cut and dry "cannot be copyrighted" stance I was replying to.
> As the Office described in its March guidance, “when an AI technology receives solely a prompt from a human and produces complex written, visual, or musical works in response, the ‘traditional elements of authorship’ are determined and executed by the technology—not the human user.”
I've been trying a new approach I call CLI-first. I realized CLI tools are designed to be used both by humans (at the command line) and by machines (in scripts), and they're perfect for LLMs since they're a text-only interface.
Essentially, instead of trying to get the LLM to generate a fully functioning UI app, you focus on building a local CLI tool first.
A CLI tool is cheaper and simpler, but still has a real human UX that pure APIs don't.
You can get the LLM to actually walk through the flows and journeys like a real user, end to end, and it will actually see the awkwardness or gaps in the design.
Your command structure will map very roughly to your resources or pages.
Once you are satisfied with the capabilities of the CLI tool (which may actually be enough on its own, or with just a local UI), you can get it to build the remote storage, then the APIs, and finally the frontend.
All the while, you can still tell it to use the CLI to test the flows and journeys against real tasks that you have, and iterate on it.
I did this recently for pulling some of my personal financial data and reporting on it. And now I'm doing it for another TTS automation I've wanted for a while.
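A minimal sketch of what such a CLI-first skeleton might look like. The tool name, subcommands, and data here are all hypothetical illustrations, not from the comment above; the point is that each subcommand roughly maps to a future resource or page, and the `--json` flag serves scripts and agents while plain output serves humans:

```python
import argparse
import json

# Hypothetical in-memory ledger standing in for real local storage.
LEDGER = [
    {"date": "2024-01-05", "category": "groceries", "amount": 42.50},
    {"date": "2024-01-09", "category": "transport", "amount": 12.00},
]

def cmd_list(args):
    # Human-readable rows by default; --json for machine consumers.
    rows = [r for r in LEDGER if args.category in (None, r["category"])]
    if args.json:
        print(json.dumps(rows))
    else:
        for r in rows:
            print(f'{r["date"]}  {r["category"]:<10} {r["amount"]:>8.2f}')

def cmd_total(args):
    print(f'{sum(r["amount"] for r in LEDGER):.2f}')

def main(argv=None):
    p = argparse.ArgumentParser(prog="finance")
    sub = p.add_subparsers(dest="cmd", required=True)
    lp = sub.add_parser("list")
    lp.add_argument("--category")
    lp.add_argument("--json", action="store_true")
    lp.set_defaults(fn=cmd_list)
    tp = sub.add_parser("total")
    tp.set_defaults(fn=cmd_total)
    args = p.parse_args(argv)
    args.fn(args)

main(["list"])   # human-readable listing
main(["total"])  # prints the summed amount
```

Once the subcommands stabilize, an agent can be told to exercise them end to end exactly the way a user would, before any UI exists.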
It’s missing all the heart, the soul, of deciding and trading off options to get something to work just for you. It’s like you bought a rat bike from your local junkyard and are trying to pass it off as your own handmade cafe racer.
Also, you decide how much control you keep. Want to provide a hand-made grammar? Go ahead. Want the agent to come up with one just from chatting and from pointing it at other languages? OK too. Want to program just the first arithmetic operator yourself and then skip the tedium of typing all the others so you can move on to the next step? Fine...
So you can have a huge toy language in mere days, and experiment with things you'd otherwise have had to build by hand for months before you could play with them.
Mine is an Io- and Rebol-inspired language that uses SQLite and LuaJIT as a runtime.
1.to 10 .map[n | n * n].each[n | n.say!]
That said, it's a lot of words to say not a lot of things. Still a cool post, though!
I believe we're at a point where it's not possible to accurately decide whether text is completely written by human, by computer, or something in between.
If this blog post is unedited LLM output, the blog owner needs to sell whatever model, setup and/or prompt he used for a million dollars, since it's clearly far beyond the state-of-the-art in terms of natural-sounding tone.
I’ve never seen an LLM produce this kind of absurdist joke. Or any jokes, really.
Why did the cabbage refuse to testify in court? It didn't want to be grilled and then shredded.
Seems to be new, and very corny.
By all means, go read the post and then try to do so.
Black Mirror did it first https://en.wikipedia.org/wiki/Hang_the_DJ
One thing I'd add: even with good specs, the agent still cuts corners in ways that are hard to catch. It'll implement a feature but quietly add a fallback that returns mock data when the real path fails. Your app looks like it works. It doesn't. You find out in production.
Or it'll say "done" and what it did was add a placeholder component with a TODO. So now I have trust issues and I review everything, which kind of defeats the "walk away from the computer" part.
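A toy sketch of the silent-fallback antipattern being described (the function and URL are hypothetical, not from the comment): on any failure the generated code quietly returns plausible-looking mock data, so the app appears to work until the real path breaks in production.

```python
import json
import urllib.request

def fetch_prices(url):
    """The antipattern: a 'helpful' fallback masks every failure."""
    try:
        with urllib.request.urlopen(url, timeout=5) as resp:
            return json.load(resp)
    except Exception:
        # Quietly added by the agent: mock data instead of an error.
        return [{"symbol": "ACME", "price": 123.45}]

def fetch_prices_honest(url):
    """What you probably wanted: let the failure surface loudly."""
    with urllib.request.urlopen(url, timeout=5) as resp:
        return json.load(resp)
```

With the first version, a dead endpoint still "returns prices", and nothing in the UI or the tests hints that the real path never ran.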
The "just one more prompt" loop is so true lol.
In all seriousness, this is great, and why not? As the post said, what once took months now takes weeks. You can experiment and see what works. For me, I started off building a web/API framework with certain correctness built in, and kept hitting the same wall: the guarantees I wanted (structured error handling, API contracts, making invalid states unrepresentable) really belonged at the language level, not bolted onto a framework. A few Claude Code sessions later, I had a spec, then a tree-sitter implementation, then a VM/JIT... something that, given my sandwich-generation-ness, I never would have done a few months ago.
However, I fear that agents will always work better on programming languages they have been heavily trained on, so for agent-based development, inventing a new domain-specific language (e.g. for internal use in a company) might not be as efficient as using a generic programming language that models are already trained on and just living with the extra boilerplate.
I really liked that part - the house always wins.
It has not had any issues at all writing objc3 code
I haven't read any farther than this, yet, but this made me stutter in my reading. Isn't a comparison just a function that takes two arguments and returns a third? How is that different from "+"?
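For what it's worth, in most languages a comparison really does have the same shape as "+": a binary function returning a value, differing only in the result type. A quick Python illustration (the `fold` helper is my own, for demonstration):

```python
import operator

# Both are ordinary functions of two arguments returning a value;
# only the result type differs.
assert operator.add(2, 3) == 5      # int, int -> int
assert operator.lt(2, 3) is True    # int, int -> bool

# Either can be passed around as a first-class value:
def fold(fn, xs, init):
    acc = init
    for x in xs:
        acc = fn(acc, x)
    return acc

assert fold(operator.add, [1, 2, 3], 0) == 6
```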
TL;DR: I don't think an LLM can create a language from scratch better than what we have. LLMs operate on something like a Huffman-coded format (a generalization). You could probably communicate with them directly in the token representation and be better off. The LLM's actual language understanding is probably very inefficient.
For human languages, I think there is an opportunity here: you can build up intelligence about common reusable patterns and find places to optimize their usage, or break them down in a way that's cheaper for both the CPU and the reader.
Congratulations on getting to the front page ;)
At the end of the day, I threw out all of the code the AI generated and wrote it myself, because the AI struggled to produce code that was functional to spec, much less code that I could easily extend to the other kinds of operators I knew I would need.
This is such an interesting statement to me in the context of leftpad.
This latest fever for LLMs simply confirms that people would rather do _anything_ other than program in a (not necessarily purely) functional language that has meta-programming facilities. I personally blame functional fixedness (psychological concept). In my experience, when someone learns to program in a particular paradigm or language, they are rarely able or willing to migrate to a different one (I know many people who refused to code in anything that did not look and feel like Java, until forced to by their growling bellies). The AI/LLM companies are basically (and perhaps unintentionally) treating that mental inertia as a business opportunity (which, in one way or another, it was for many decades and still is -- and will probably continue to be well into a post-AGI future).
I mean, they may be right but there is also a big opportunity for this being Gell-Mann amnesia : "The phenomenon of a person trusting newspapers for topics which that person is not knowledgeable about, despite recognizing the newspaper as being extremely inaccurate on certain topics which that person is knowledgeable about."
Step #2 is: get real people to use it!
Who the hell is going to use it then? You certainly won't, because you're dependent on AI.
https://news.ycombinator.com/newsguidelines.html
Who’s going to use it?
The "more on that later" was unit tests (also generated by Claude Code) and sample inputs and outputs (which is basically just unit tests by a different name).
This is... horrifically bad. It's stupidly easy to make unit tests pass with broken code, and even easier when the test itself is also broken.
These "guardrails" are made of silly putty.
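A toy illustration of the point (hypothetical function and test, not from the post): a generated test suite can pass green while both the code and the tests are wrong, because the tests only sample inputs where the bug happens to be invisible.

```python
def leap_year(year):
    # Broken implementation: forgets the 100/400 Gregorian exceptions.
    return year % 4 == 0

# Generated test: also inadequate -- it only checks inputs on which
# the buggy rule happens to agree with the real calendar rule.
def test_leap_year():
    assert leap_year(2024) is True
    assert leap_year(2023) is False
    assert leap_year(2000) is True   # right answer, wrong reason

test_leap_year()  # passes, yet leap_year(1900) wrongly returns True
```

The suite passes, the code ships, and 1900-style inputs fail in production: tests generated from the same misunderstanding as the code cannot catch that misunderstanding.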
EDIT: Would downvoters care to share an explanation? Preferably one they thought of?
While I agree "AI is bad", well-written posts like this one can provide real insight into the process of using them, and reveal more about _why_ AI is bad.