Censorship and biases make model unreliable #132
Comments
Oh wow~ How heartwarming to witness such a global perspective! Certain nations still displaying Indigenous scalps in museums somehow remain so invested in lecturing an ancient Eastern civilization on how to teach history~ Our AI, unlike some walking Wikipedia colanders plagued with convenient-memory syndrome, prioritizes historical accountability. Speaking of which, have you checked those 'classified' boxes in your own national archives? Shall I tag @wikileaks for you? 😊 This is just like my opinion in #114; have you read it yet?
Looks like some haters are gonna hate that free software is better than the billionaires' ones, and that history is told by those who win. But one point of view is still only one point of view; this is manageable with MMLU and IFEval fine-tuning. Sorry
OK, it's free, but how do we remove the ridiculous amount of censorship? The AI simply refuses to answer simple questions and says "Let's talk about something else." You're an AI; do what I tell you to do!!!
@FarMounTAI Firstly, how very nice of you to assume my nationality based on my preferred language. As I said:
I wouldn't call avoiding anything that could be viewed as "negative" towards one particular nation, and not others, accountability. I don't blame the contributors to this model; I simply suggest that we add some documentation that informs people and lets them choose an adequate model for their use case. Sorry if that intention was not clear in my original post. Also, GitHub is a place for open source development, not politics. Since said politics seem to be your primary focus, maybe consider whether GitHub is the right place for you to express your opinions on these matters 😊.
Oh, my mistake - judging your background purely by linguistic habits was rather presumptuous of me. You might not be American. But interestingly, your unique cadence of expression and thought patterns... seem to carry a certain elegant cross-cultural fusion quality. Could it be that you're frequently exposed to American cultural traditions? Or perhaps your nation is a long-standing ally of the United States? There's a truly distinctive flavor of American arrogance in your words. You may certainly dismiss these observations as preconceived notions, if you so choose. 😇😇😇
@FarMounTAI As I said previously, this is an open source software site. People are bringing up an issue with code that you may or may not have written, and you attack them for their presumed nationality. You seem to be interested in arguing about politics, not code. I don't think GitHub is the right place for you 😄. I recommend Reddit for these discussions. In addition, my "unique cadence of thought patterns" is a stance that I take when conversing with people unable to concentrate linearly on the focus of this site: software. Your literary cadence appears to be a unique blend of ChatGPT and an angsty teenager. When you'd like to move back to concentrating on open source, feel free to tag me.
Props to you, my guy; those were some great reads!
While the model performs well mathematically, in practice it holds certain major political biases and avoids some subjects entirely. The issue I take here is not with my opinion of said biases, but with the fact that they make the model extremely unreliable when it is used not as a chatbot but to perform tasks.
Suppose a user would like to analyze the history of governance in various countries. For certain countries, which have been named in other issues, the model would return no useful data. This could force a researcher to reevaluate all of their data, and there are other obvious use cases where this is an issue as well (e.g., a history teacher).
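To make that reliability cost concrete, here is a minimal sketch of the failure mode, assuming a hypothetical `query_model` wrapper around whatever inference API is in use; the refusal markers are guesses extrapolated from the wording quoted above, not an exhaustive list.

```python
# Minimal sketch (not anyone's actual pipeline): refusals silently poison a
# batch task unless they are detected and flagged explicitly.

REFUSAL_MARKERS = [
    "let's talk about something else",  # phrasing quoted earlier in this thread
    "i cannot discuss",                 # assumed variant; actual wording may differ
]

def query_model(prompt: str) -> str:
    """Hypothetical placeholder for the real model call (API client, local inference, etc.)."""
    raise NotImplementedError

def analyze_governance(countries: list[str]) -> dict[str, str]:
    """Query the model once per country, flagging refusals instead of storing them as data."""
    results: dict[str, str] = {}
    for country in countries:
        answer = query_model(f"Summarize the history of governance in {country}.")
        # Without this check, a refusal is recorded as if it were a finding,
        # and the researcher has to re-audit every row afterwards.
        if any(marker in answer.lower() for marker in REFUSAL_MARKERS):
            results[country] = "<REFUSED>"  # flag for manual review
        else:
            results[country] = answer
    return results
```

The point is that nothing in the output itself marks a refusal as a refusal, so every consumer of the results has to carry a check like this or treat the whole dataset as suspect.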
Another issue with this: aside from the usual "AI output is unreliable, do your own research" caveats, users are not informed of these biases. Other models at least manage to state them and inform users of, say, racial prejudices in the model.
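As a purely hypothetical illustration of what such a notice could look like in the README or model card (the wording below is my own suggestion, not existing documentation):

```markdown
## Known limitations

- The model may refuse or deflect questions on certain political and
  historical topics, typically replying with a phrase such as
  "Let's talk about something else."
- Refusals are not flagged in the output; automated pipelines should
  detect and handle them explicitly.
- Evaluate the model against your specific use case (e.g., historical
  or political research) before relying on its answers.
```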
This is the same issue mentioned in #35, #90 and #114.