Hacker News

Diverse perspectives on AI from Rust contributors and maintainers (https://nikomatsakis.github.io)

129 points by weinzierl about 6 hours ago | 71 comments

ysleepy about 5 hours ago |

I enjoyed reading these perspectives; they are reasoned and insightful.

I'm undecided about my stance on gen AI in code. We can't just look at the first-order, immediate effects; we also have to consider the social, architectural, power, and responsibility aspects.

For other areas (prose, literature, emails) I am firm in my rejection of gen AI. I read to connect with other humans; the price of admission is spending the time.

For code, I am not as certain. Nowadays I don't regularly see it as artwork or human expression; it is a technical artifact where craftsmanship can be visible.

Will gen AI become the equivalent of a compiler, so that in 20 years everyone depends on their proprietary compiler/IDE company?

Can it even advance beyond patterns/approaches that we have built until then?

I have many more questions than answers, and both embracing and rejecting it feels foolish.

henry_bone about 5 hours ago |

The industry and the wider world are full steam ahead with AI, but the following takes (from the article) are the ones that resonate with me. I don't use AI directly in my work for reasons similar to those expressed here[1].

For the record, I'll use it as a better web search or as an intro to a set of ideas or a topic. But I no longer use it to generate code or solutions.

1. https://nikomatsakis.github.io/rust-project-perspectives-on-...

andai about 6 hours ago |

>It takes care and careful engineering to produce good results. One must work to keep the models within the flight envelope. One has to carefully structure the problem, provide the right context and guidance, and give appropriate tools and a good environment. One must think about optimizing the context window; one must be aware of its limitations.

In other words, one has to lean into the exact opposite tendencies of those which generally make people reach for AI ;)

_pdp_ about 6 hours ago |

AI ultimately breaks the social contract.

Sure, people are not perfect, but there are established common values that we don't need to convey in a prompt.

With AI, despite its usefulness, you are never sure whether it understands these values. They might be somewhat embedded in the training data, but we all know these properties are much more malleable and unpredictable than those of a human.

It was never about the LLM to begin with.

If Linus Torvalds makes a contribution to the Linux kernel without actually writing the code himself, delegating it to a coding assistant instead, for better or worse I will accept it 100% at face value. This is because I trust his judgment (while accepting that he is as fallible as any other human). But if an unknown contributor does the same, even if the code produced is ultimately high quality, you would think twice before merging.

I mean, we already see this in various GitHub projects. There are open-source solutions that whitelist known contributors, and it appears GitHub might let you control this too.

https://github.com/orgs/community/discussions/185387

rusty1 about 4 hours ago |

What a sober read. Hey, one of my old colleagues is on there. They are also one of the best engineers I have ever encountered, period. Nice person too.

I don't know what most people are doing day to day with AI, but this is the closest to reality I have seen thus far. I have seen posts on here about people having 4 agents producing 50 kloc a day or something, while I can't reliably get a complete spec to output 50 lines of commit-worthy code. Emphasis on reliable and commit-worthy. I won't go into pros and cons, but I just don't see how everyone is operating like this, especially with a team of people and any semblance of legacy code. Note: by legacy I do not mean old, bad code; I mean pre-existing code from various contributors.

For research on crates/integrations I see some benefits, but sometimes I think that is because search engines have been enshittified, with the top 100 results for nearly all queries inundated with AI slop. 10 out of 10 times I would rather ping a person. Lately I have been assuming most pro-AI articles are written by LLMs or advertising campaigns and ignoring them. It's working well so far. Still a top performer on my team...

ghosty141 about 6 hours ago |

The title is misleading. It says in one of the first sentences:

> The comments within do not represent “the Rust project’s view” but rather the views of the individuals who made them. The Rust project does not, at present, have a coherent view or position around the usage of AI tools; this document is one step towards hopefully forming one.

So calling this "Rust Project Perspectives on AI" is not quite right.

olalonde about 5 hours ago |

I feel bad for people who reject LLMs on moral grounds. They'll likely fall behind, while also having to live in a world increasingly built around something they see as immoral.

userbinator about 4 hours ago |

Anything that uses the phrase "diverse perspectives" is not worth reading.

yonran about 5 hours ago |

Seems like a lot of people’s problems with AI come from talking to the dumber models and having them not provide sufficient proof that they fixed a bug. Maybe instead of banning AI, projects should set a minimum smarts level; e.g., to contribute, you must use gpt-5.4-codex high or better for either writing the code or reviewing it.