-
My moral compass on this consists of the question "what are the effects and consequences of widespread adoption of this technology on our society and planet?" Does a 1-7B, locally running model used for moderating a website destroy our planet? Does it have a negative impact on the job market? I'd argue that the answer to both questions in this case is "No", and as such, I personally see no immediate problem with using genAI for this purpose (misalignment and training-bias questions aside).

Asking the same set of questions about genAI-powered image generators, the answer is different, in my opinion. The consequences of widespread use of this particular genAI tool are already known, and have manifested as mass firings and an overall burden on the creative industry, including freelancers and artists. For each case where genAI-based image generators may be of use as part of a larger piece of software, there are better, faster, more ethical alternatives available.
-
Semantic resonance
To start with, I’d love to let the prompt engineers loose on all our Pages, such as Values. I wanna see what kinds of PRs the bots or bot-assisted authors can come up with, especially in terms of relevant links for our existing content.
-
Docs-seeding by automated Q&A
I have this hunch that, even if functionally flawed, ‘directionally correct’ first drafts of auto-generated docs could helpfully trigger people to finish off what a bot started, when they otherwise wouldn’t ever get around to doing any proper docs writing. Essentially I’m hypothesizing that the auto-docs might be able to tap into this phenomenon: https://xkcd.com/386/

Like, the bots are pretty good at the scaffolding. That’s what most of these “magic” generators of games and websites are, they’re just a slightly customized starter pack. That does not a complete project make (which is where the big disconnect happens in a lot of the discourse around the usefulness of these tools), but it can get people started more easily. For docs in particular, getting that first bit started can go a long way.

We’ve talked about how we’d like tools such as DeepWiki to write way more succinctly, so that it’s more like a series of prompts for devs to finish rather than an attempt at a final doc product. My worry is that the AI is entirely dependent on excessive wordiness in order to function, because the only way they’re able to get some things right is by throwing everything they’ve got at the wall.
Yeh, that's the kind of recursiveness I feel could work really well with this sort of thing. Maybe instead of faux-factual declarations, the AI documenter should just be asking questions instead 🤔
Yeah, it's the same dynamic as the xkcd joke, but without the friction of misunderstanding. I'm increasingly of the conviction that the highest-value application for a DeepWiki-type app is to ask developers questions about codebases, rather than trying to automatically document them. In any case, the auto-documentation step becomes a lot more accurate if it’s derived from a preliminary Q&A with the developer(s).

For anything that’s not CRUD or otherwise very predictable, the experience with these AI tools is more often than not like this: https://news.ycombinator.com/item?id=45021550

And every incorrect fabrication is worse than no documentation at all, because it sends learners down the wrong path and causes confusion and frustration. That’s why I’m much more interested in tooling that helps pull information out from the authoritative source: not the code, but the developer who wrote the code, where the complete mental model of the system actually resides.
Mhm, something like this:
So as a first step, the RAG is used to ask good questions rather than writing any exhaustive docs. A DeepWiki-type app should assume the role of a new contributor who is trying to get started with the project: beginning with the basics like how to set it up, and then trying to form a mental model of the project based on what can be inferred from its readmes and codebase. This doesn't have to be very advanced AI-assistant bot stuff either; most of the essential questions can simply be scripted (a rough sketch of what that could look like is below). That's a feature, as it makes the auto-doc process more understandable when it's predictable, at least to a point.
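To make the ‘scripted questions’ idea concrete, here’s a minimal, hypothetical sketch (the file checks and function names are made up for illustration, not taken from DeepWiki or any existing tool): instead of writing docs, the tool walks the repo, notices what a new contributor couldn’t figure out on their own, and turns each gap into a question for the developers to answer.

```python
# Hypothetical sketch of a scripted "new contributor" question pass.
# Instead of generating docs, it inspects a repo and emits questions for
# the developers; their answers become the seed for the actual docs.
from pathlib import Path

# Each check pairs a glob pattern with the question to ask when nothing matches.
ONBOARDING_CHECKS = [
    ("README*",             "There's no README. How would a new contributor get oriented?"),
    ("LICENSE*",            "No license file found. Under what terms can this be used?"),
    ("CONTRIBUTING*",       "How should changes be proposed and reviewed?"),
    ("docker-compose*.yml", "Is there a one-command way to run the project locally?"),
    ("*requirements*.txt",  "How are dependencies installed and pinned?"),
    ("tests",               "How do I run the tests, and what does 'passing' mean here?"),
]

def scripted_questions(repo_root: str) -> list[str]:
    """Return onboarding questions for anything the repo doesn't answer itself."""
    root = Path(repo_root)
    questions = [q for pattern, q in ONBOARDING_CHECKS if not list(root.glob(pattern))]
    # Always close with an open prompt aimed at the developer's mental model.
    questions.append("In one paragraph: what does this project do, and for whom?")
    return questions

if __name__ == "__main__":
    for q in scripted_questions("."):
        print("-", q)
```

The point being that the output is a checklist for humans rather than a doc product; the RAG/LLM part would only come in afterwards, to turn the developers’ answers into prose.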

-
The topic of genAI is on my mind a lot: how to talk about it, how to share materials derived from it, and even how to make contributions with it.
I really appreciate everyone who actively engages with us in this community, and I hate to see this topic cause so much division and antagonism. Having some ‘community guideline’ on AI use in roomy/muni-town has been brought up as a useful artifact, so here’s the gist of it.
It’s tricky because we can’t simply enforce a blanket ban on all things genAI.
First of all, there’s the general debate and necessary criticism of AI tech, which we must continue to make space for here so we can collectively develop our thoughts.
Then there is legitimate production use of genAI as it stands today, such as for spam-blocking and moderation.
I do know this:
We are not absolutely against any experimentation and exploration of AI tooling, but the burden of proof-of-usefulness is squarely on the AI tools, on account of their massive, as-yet-unaccounted-for negative externalities.
What does useful look like? For ever-more capable bots, it will eventually look like any kind of value-add in the form of code, docs, design talk, etc.