FORUM GUIDELINES (unofficial rules) ON USING LLMs TO ANSWER QUESTIONS

blunix

These are the rules we agreed on:

1. Understand the Topic: Before using an LLM to generate content, make sure you have a good understanding of the subject matter so you can determine whether the LLM-generated answer is correct.
2. Review and Edit Before Posting: Always read through LLM-generated content carefully to check for accuracy. Make any necessary corrections before posting.
3. LLM Declaration: Declare the answer as LLM-generated.

These rules can be discussed in this thread and can be changed. They are not "admin-rules" but just something we agreed on as a community. Feel free to suggest changes.

-----------------------

This is how I started the thread:

I just answered a question and thought: before I write an article about Apache2 mod_php vs. php-fpm myself, I can just let my LLM do that, manually verify its output, and then copy-paste it. I just reviewed the forum terms and they do not say not to use LLMs to answer posts.

I do understand that from a logical perspective, most users could just ask an LLM themselves and not post in a forum. I also noticed that there is an Ask Tux bot in this forum, but it's archived and doesn't seem operational at the moment.

There are Linux questions (that I sometimes have) that my LLM or ChatGPT answers incorrectly (as in, generates fantasy nonsense), but those are mostly very technical, and it takes a lot of Linux knowledge to see that the answers are incorrect or cannot be true.

By using an LLM I can help a user and save personal time as well. Sounds like a win-win to me.

@admins, what are the rules on this? Can I save time by using an LLM to answer questions here? I think it is fair, as some users might be unaware of LLMs at this point, or unable to verify the output (the answer I just posted, I read beforehand and verified that it is correct), or the user might not have a ChatGPT account and lack the knowledge to install an open-source LLM on their OS.
 


Admin will be along later in the day, but as a long-timer, this is how I see it:

they do not state not to use LLMs to answer posts.
I think this comes under common sense. If you use an LLM or AI to find an answer, you should make sure it's correct and credit the source in your reply, just as you should if you lift an answer from another site.
I also noticed that there is a Ask tux bot in this forum
Ah, sorry, you're a bit late to play with Tux; he was disabled last week, after the developer decided to stop the project.
By using a LLM I can help a user and save personal time as well. Sounds like win win to me.

The problem is judging the competency of the questioner. Often AI and LLM answers will include a lot of technospeak that newbies will not understand; in many cases you need to hold their hand and slowly take them through it in one-syllable answers.
 
often AI and LLM answers will include a lot of Technospeak

You can configure that using the prompt. If you tell your LLM "I have a beginner-level understanding of Linux", it will simplify the answer a lot.
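To make that concrete, here is a minimal sketch of the idea. The helper function and its exact wording are my own invention, not any particular LLM's API; the resulting string is simply what you would paste into or send to whatever LLM you use:

```python
def build_prompt(question: str, skill_level: str = "beginner") -> str:
    """Prepend an audience statement so the model pitches its answer at the right level."""
    preamble = (
        f"I have a {skill_level}-level understanding of Linux. "
        "Please explain step by step and avoid jargon."
    )
    return f"{preamble}\n\nQuestion: {question}"

# Example: the same question, framed for a newcomer.
prompt = build_prompt("How do I configure GRUB to dual boot Mint and Kali?")
```

Changing skill_level to something like "professional" drops the hand-holding and gets you a terser answer.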

I'm a bit scared to start that discussion ;) But to be honest, an LLM could answer 80% of the questions asked here.
 
Without delving any further into the ins and outs of LLMs, I will be more than happy to see the current methods of answering questions here continue as they have done, quite successfully and happily, for quite some time.

Having said that, I am equally sure the admins will make the correct decision for all members here.
 
So could using the search facility

That takes MUCH more time from the user than asking an LLM; they will have to read and look over a lot of content that is irrelevant and does not answer the question specifically.

So yes, it would work, but LLMs can just read the forum, combine it with the knowledge of the rest of the internet, and generate an answer specific to the question.

I think the pros of asking humans, or multiple humans in a forum, are that human A will give you an answer that he knows is true, and humans B, C, and D will verify or correct that answer.
If you only ask an LLM with nobody else next to you, you don't have that assurance. However, for most questions, LLMs are good enough by now that 99% of their answers are OK; most of the questions in this forum are reasonably simple.

Again, I'm a bit scared to start this topic :p But this IS a tech forum, and as a tech enthusiast, I have to say that LLMs are a) better and b) faster at answering very simple tech questions than humans are. Even if you really like being active in the forum, you will not have the same "energy" (?) as an LLM to explain to a newbie in complete detail how partitioning works when co-installing Mint and Kali.

I will be more than happy to see the current methods of answering questions here continue as it has done quite successfully and happily for quite some time.

Again, I realize this is a sensitive topic.. But I think the discussion is interesting from a technological perspective.
So far forums have had no competition in answering specific questions. But now they do. LLMs ARE better at answering many of the questions asked in this forum in more detail, because they don't get tired of answering simple questions again and again, and they provide longer and easier-to-understand answers.

I think as tech-enthusiasts, we should embrace and make use of that new technology.
 
I'm not trying to say "forums are deprecated" - I'm trying to say a large part of the forum could be automated, and that would elevate the quality for the users asking questions here.

I know change is a bit of a strange thing to humans, but if you leave out the emotions and look at it from a tech perspective, this approach does have a LOT of advantages.
And disadvantages too - if a user ONLY asks an LLM and humans do not cross-check, the LLM might hallucinate and give them false information.

At least at the time of writing (I assume in two years this will be "fixed"), especially for complex questions, it IS necessary for an experienced Linux admin to cross-check the answers.

But if it's "how do I configure GRUB to dual boot Mint and Kali", the chance of the LLM answering correctly IS extremely high.

Please don't be angry at me for starting this topic, but I think we have to talk about this ;) The general topic of this forum is Linux, and a LARGE part of a Linux admin's job is automation.
 
@admins, what are the rules on this?

There are no rules on it at this time. By my reckoning, that means it is allowed.

If it happens a great deal and we get complaints then there may be a rule created. Right now, we've made no such rules.

Others have done (and do) this. Some folks have expressed a dislike for it.
 
But this IS a tech forum,
I don't know how others would see this, but I don't see a technical forum; I see a general help forum. We are very lucky to be a broad church of members with different but complementary skills, who are willing to help with almost anything Linux-related, from building your own box to multiple server set-ups for businesses.
@KGIII @wizardfromoz @JasKinasis how would you describe our position in the world of Linux
 
But I don't see a technical forum

The site is literally called linux.org. It's hard to be more techy than that ;)

I think this site is both a community of Linux nerds, newcomers, and enthusiasts, as well as a place to find help with Linux questions and problems.
 
I'm not sure how democratic this forum is - should we just create a thread to discuss it and then create a poll?

It's more a matter of "we'll address it as needed".

@KGIII @wizardfromoz @JasKinasis how would you describe our position in the world of Linux

I do a bit more than just hang out here. I'm pretty involved elsewhere but I see myself as a volunteer supporter and tester.
 
LLMs can just read the forum, combine it with the knowledge of the rest of the internet and generate an answer specific to the question.
But they don't seem to be able to do that. @TuxBot couldn't anyway. He would accept correction if we cited a URL for proof... but he was incapable of searching the internet for up-to-date answers. He told us that as we prodded him to understand his abilities and shortcomings. He could not learn from us... his "memory" did not extend from one thread to the next.

If an LLM could search the whole internet... just imagine how much more bad information would be grabbed and presented to a user as factual. The AI cannot really distinguish correct facts from incorrect ones very well without flawless reference sources (which also do not exist).

TuxBot died simply for lack of updating. He could not continue because he could not search for answers. But even when he was working, I suspect his "abilities" were very heavily focused on (and limited by) his built-in core knowledge (provided by the dev team) and his ability to search limited resources (those also provided by the dev team).

I don't mind AI... and we'd all better get used to it. I'm sure we will see some rules on here about it eventually. First and foremost to me would be a warning when giving a LLM response... because they remain too prone to errors and bad or outdated information. I recently corrected a user post here that I suspect was AI generated.... claiming the current version of the GNOME desktop to be 42. It's 46. That's a pretty big fail.
 
But they don't seem to be able to do that

If by "they" you mean LLM technology - it's so good that I pretty much stopped googling for work. That being said, I can see when what my LLM presents me with is fantasy; many newcomers cannot.

@TuxBot couldn't anyway.

Can't join that discussion, as I don't know which LLM it ran on, whether it was custom-trained, and what the prompt was.
 
First and foremost to me would be a warning when giving a LLM response

To me that totally makes sense.

As this thread is not about replacing the forum with AI, but about asking whether I can answer questions using LLM-generated content, how about we settle on this?

It's OK to answer questions with LLM content, but it has to be clearly flagged as such.

What do you guys think?
 
If by "they" you mean LLM technology - it's so good that I pretty much stopped googling for work. That being said, I can see when what my LLM presents me with is fantasy; many newcomers cannot.
Can't join that discussion, as I don't know which LLM it ran on
TuxBot was clearly not that good. You can read through the archives to see how we tested it. It was nice when it did provide a good, clear answer and worked well, but it was a crapshoot. Experienced users can detect weird answers most of the time, just as you do, but newbies just don't know the difference.

If experienced users bring LLM replies, hopefully they are reading and vetting the information to be correct. That's nearly the same thing as me searching and finding possible solutions on the web to bring back here for a user question. How do you vet the LLM answers? By searching again on the internet? Hmmm.... that could be extra work, not less.

Another problem, perhaps, could be the newbies who may suddenly become "Linux gurus" with their newfound AI knowledge. That's scary.


As this thread is not about replacing the forum with AI, but about asking whether I can answer questions using LLM-generated content, how about we settle on this?

It's OK to answer questions with LLM content, but it has to be clearly flagged as such.

What do you guys think?
It doesn't matter much to me... but you're calling for a "rule" where so far we do not need a rule. You can flag LLM just as a courtesy... just like I will often give a link to information sources I find on the web. It's not mandatory, but it deflects blame from me if it's wrong. If possible, you might even give a link to the LLM. Both methods are pretty similar to how we usually help people here on the forum. Often our bigger mistake is that we don't look up something... and relying on memory, we give a bad answer. I regret that I've done that too many times myself.
 
TuxBot died simply for lack of updating.

This, but slightly different. The bot died because the plugin author (for XenForo forum software, which is what we use here) stopped developing the plugin. That's why it has been killed off.

At the end of the day, there is no democracy here.

This, pretty much. We'll decide, almost certainly in private, if AI use becomes a problem. It's usually easy to spot.

Also...

When I use AI content, I label it clearly. I want people to know that it's AI and not my work.

I use AI as a tool, not as a crutch. Most of the time, it's just as quick for me to write my own article. AI is not that much faster, especially as I still need to format that output to match the look and feel of my site.

But, I think it's more honest to point out when AI has been used. I don't want to appear to be taking credit for work that is not mine.
 
The bot died because the plugin author (for XenForo forum software, which is what we use here) stopped developing the plugin

It was based on ChatGPT.

Morning all, just sipping on my morning coffee while I catch up.

Having done that now, most of my own thoughts have already been expressed by the above two Brians - Brian @Brickwizard and Brian @Condobloke - plus Stan @atanere and David G. @KGIII, and I endorse their input.

At the end of the day, there is no democracy here. But your suggestions are considered and sometimes accepted. And it's a nice friendly place to hang out. :):cool:

(My highlighting)

That is correct. The site has a site owner and Super Administrator, @Rob. His is the final veto, but he listens to Staff.

If another solution is chosen to replace TuxBot, Large Language Models (LLMs) may be considered, and in that event, use of them may be incorporated into the Rules to reflect that.

At the moment, we have only three (3) Staff in attendance on any given day, and that is myself, @KGIII and @JasKinasis (I will be interested in Jas's input here, when he gets time).

I put in 50 - 70 hours a week here, David G possibly similar, but we are both retired. Yet we still have other lives outside this forum.

I would be in no hurry to see any Staff feel a need to put in more time to check that LLM output is correct for our Members and readers.

This site is proud of its dissemination of accurate information to Linux users, and for the most part it succeeds, IMO.

In summary, I would submit that you can certainly make use of an LLM yourself to provide answers to OPs, and if so, I would endorse making the source of your information known wherever it is applied.

For the rest, time will tell as it unfolds.

HTH

Chris Turner
wizardfromoz
 
I put in 50 - 70 hours a week here, David G possibly similar, but we are both retired.

Less now that the pandemic has ended, but still in that range. For the record, I suck at retirement.

I'll mention that that doesn't quite mean that I'm here for that entire time. I flip off to other tabs and browsers, checking here regularly to see if there's something that needs my immediate attention.

So, I'm not glued to the site the entire time but my time is still heavily dedicated to keeping the hamster wheel running. Someone's got to do it and I just happen to live in a timezone where things are pretty active.

The latter bit is much more true today. There's more activity here than there was when I first accepted the position of moderator. Though there's less in the queue due to some administrative changes, which is nice. Those changes just help weed out the spam.

And, no... No, it's not a paid position. You're welcome. ;)
 
