So I entered "Bazzite" into Meta's AI client and this is what it gave me. Not even close. Did it just make it all up?
Bummer. I usually use Llama locally or at meta.ai, but just to sense-check as a baseline whether ChatGPT is any better - this one it got right.
Was this the Llama 70B model or the 3B model? Even the free o1-mini model I'm on for ChatGPT has a bunch of Bs and benefits from those cloud GPUs, I guess…
Seems to be catching up fast though, anecdotally!
It's obvious: the AIs all hallucinate.
That's what this is? I call it making shit up out of thin air.
Right now this is exactly what it is. No more, no less.
Artificial intelligence, especially language models like me, can "hallucinate" because they work on probabilities and patterns derived from the data they were trained on. This means that we do not really "understand" what we say, but rely on what is statistically likely to fit a particular request or question. Hallucinations in AIs are errors where the generated answer is incorrect, invented, or contradictory, even though it sounds plausible.
Here are some main reasons why this happens:
1. Lack of “world perception” or contextual understanding:
AIs do not have a real perception of the world or a deeper understanding of information. They only have access to the large amount of text they have processed and use it to generate answers. If they draw on information that they do not fully grasp, or that is incomplete, they can make false or fabricated statements.
2. Statistical patterns instead of real knowledge:
Language models such as GPT-4 recognize patterns in the training data and use them to predict the next words. This means that if the model does not have a reliable source for an answer, it can fall back on general patterns that appear plausible but are not necessarily correct. It then "hallucinates" an answer that matches what it has seen in similar texts, but is not based on verified knowledge.
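To make the "patterns instead of knowledge" point concrete, here is a deliberately tiny sketch (not from ChatGPT's answer; the corpus and counts are invented). A bigram model only knows which word most often followed which in its training text - it has no notion of whether the continuation is true:

```python
from collections import Counter, defaultdict

# Tiny made-up corpus standing in for training data.
corpus = "the cat sat on the mat . the cat ate the fish .".split()

# Count which word follows which (a bigram model).
follows = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    follows[prev][nxt] += 1

def predict(word):
    """Return the statistically most likely next word, or None if unseen."""
    counts = follows.get(word)
    return counts.most_common(1)[0][0] if counts else None

predict("the")  # "cat": the most frequent follower, not verified knowledge
predict("dog")  # None: an unseen word leaves nothing to fall back on
```

Real models generalize far beyond literal counts, but the principle is the same: the output is the likely continuation, not a checked fact.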
3. Incomplete or incorrect training data:
Language models are trained on huge data sets drawn from Internet texts, books, scientific articles, and many other sources. However, these data sets do not always contain complete, accurate, or reliable information. In addition, outdated or contradictory data may be included, which increases the likelihood of hallucinations.
4. Combination of vague information:
When a model encounters incomplete or vague information, it may begin to combine different elements to generate a plausible response. This often leads to hallucinations, since there is no objective method to distinguish between “good” and “bad” sources when the context is unclear.
5. Limited ability to verify source:
AIs cannot check external sources in real time. Although they have access to a huge amount of training data, they cannot retrieve up-to-date, reliable data on demand. This means that for newer or specialized topics, a model can only draw on the sources it was trained on. Without a system to verify facts or to differentiate between reliable and unreliable sources, it can generate incorrect information.
6. Complexity of human knowledge:
Many topics that matter to people are complex, multi-layered, and nuanced. AIs that rely primarily on statistical relationships have difficulty capturing and processing these nuances correctly. They therefore tend to give simple answers that may not be right.
7. Optimization for coherence instead of accuracy:
The models are optimized to generate coherent, fluent responses that sound like human speech. As a result, the focus is on the "probability" of an answer fitting well into the context, rather than on factual accuracy. This promotes hallucinations when coherence becomes more important than the exact truth.
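A minimal sketch of that coherence-over-accuracy trade-off (my illustration, not part of ChatGPT's answer; the scores are invented). If a wrong-but-common continuation scored higher in training text than the correct one, greedy decoding will pick it:

```python
import math

# Invented next-word scores after a prompt like "The capital of Australia is".
# The fluent-but-wrong word can outscore the correct one if it co-occurred
# more often in training text.
logits = {"Sydney": 2.0, "Canberra": 1.4, "Melbourne": 0.9}

def softmax(scores, temperature=1.0):
    """Turn raw scores into a probability distribution over words."""
    exps = {w: math.exp(s / temperature) for w, s in scores.items()}
    total = sum(exps.values())
    return {w: e / total for w, e in exps.items()}

probs = softmax(logits)
best = max(probs, key=probs.get)  # greedy decoding: most probable word wins
```

Nothing in this loop consults a fact; "best" is simply whatever the distribution favors, which is exactly why a confident-sounding answer can still be wrong.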
How can hallucinations be reduced?
- Verification techniques: One possible solution is to develop systems that search sources in real time or integrate fact-checking mechanisms to check responses before they are released.
- Improve training data: When models are trained with better curated and more reliable data, hallucinations can be reduced.
- Multimodal approaches: The use of multimodal AIs that analyze not only text, but also images or other forms of data, could help to develop a deeper understanding and reduce hallucinations.
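The "verification techniques" idea from the list above can be sketched in a few lines (my illustration, not from ChatGPT; the snippet store and the word-overlap rule are deliberately naive stand-ins for real retrieval and entailment checking). Only release a draft answer if its content words are attested in a trusted source; otherwise say so:

```python
# Invented "trusted" snippets standing in for a retrieval index.
SOURCES = [
    "Bazzite is a Linux distribution based on Fedora Atomic Desktops.",
    "Bazzite ships KDE Plasma or GNOME as its desktop environment.",
]

def grounded_answer(draft):
    """Release the draft only if every content word appears in some source."""
    words = [w for w in draft.lower().split() if len(w) > 3]
    attested = all(any(w in s.lower() for s in SOURCES) for w in words)
    return draft if attested else "I don't know."

grounded_answer("Bazzite ships KDE Plasma")   # attested, so it passes through
grounded_answer("Bazzite runs on fluxbox")    # unsupported, so "I don't know."
```

Production fact-checking systems use semantic retrieval and entailment models rather than substring matching, but the shape is the same: refuse, rather than emit, an unsupported claim.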
Overall, hallucinations in AIs are a well-known problem that goes hand in hand with the current technology, but there is continuous research to minimize these errors.
Quoted from ChatGPT.
Sounds like a disclaimer. It told me that it knew what an atomic desktop was but not what Bazzite was, and thanked me for pointing out that it misled and lied. I guess Bazzite has not been indexed by it yet, OK, but serving me garbage it just invented is unacceptable and irresponsible (and dangerous when it involves medical advice). It seems like it would rather invent/lie to you than just say "I have no idea." LOL. How to defrag XFS? sudo rm -r / thanks!
Perplexity AI was not only on point, but took me into many subtopics like a boss. I guess it's ahead in this area.
I can understand it being bad at debugging code, but my question, to me, was straightforward enough - an apple is not an orange.
Bazzite runs on Fluxbox? Just say "I don't know yet." It admitted, after I pointed it out, that it didn't know anything about Bazzite, so why invent stuff? Still beats reading outdated, randomly googled documentation, though.
The AI would rather lie to you than say it doesn’t know.
That’s why you basically have to check everything again.
But you're right. Perplexity has once again pulled ahead by searching the Internet directly, and is therefore much more up-to-date.
About Bazzite Perplexity writes:
Bazzite OS is an innovative Linux distribution based on Fedora Atomic Desktops, specifically optimized for gaming and handheld PCs. Here are the key features of Bazzite OS:
Core Architecture
Bazzite OS is built on Fedora Atomic Desktops, employing an immutable approach to system files. It utilizes atomic updates for risk-free system upgrades and supports various desktop environments such as KDE Plasma and GNOME.
Gaming Focus
The OS comes with pre-installed Steam and a Gaming Mode similar to SteamOS. It includes optimizations for responsive gameplay and improved CPU scheduling. Bazzite OS also supports HDR and VRR (Variable Refresh Rate) for enhanced gaming experiences.
Hardware Compatibility
Bazzite OS offers broad support for various handheld PCs, including Steam Deck, ASUS ROG Ally, and others. It features integrated drivers for NVIDIA and AMD graphics cards and includes a Handheld Daemon for extended functionality on handheld devices.
Additional Features
• Containerized application support for easy software installation
• Waydroid for Android app support
• Faster package updates compared to SteamOS
Pros and Cons
Pros:
• Wide range of customization options
• Rapid updates and latest drivers
• Flexibility through container support
Cons:
• Occasional bugs, though easily resolved through the rollback system
• More complex setup for advanced development tasks
Bazzite OS provides a powerful and flexible alternative to SteamOS, particularly for users who desire more control and customization options without sacrificing the benefits of a gaming-oriented distribution.