Hello everyone! This blog responds to a thinking activity task assigned by Dr. Dilip Barad, based on the study of Digital Humanities.
Here is my video source; based on it, I carried out my NotebookLM activity. Click Here
Introduction: The Unseen Biases of Our New Digital Minds
We tend to think of Artificial Intelligence as a purely logical, objective tool—a digital mind operating on the cold, hard logic of code. It's supposed to be better than us, free from the messy prejudices that cloud human judgment. But what if that machine is just a mirror, reflecting our own cultural blind spots back at us in ways we don't even recognize? That's the startling conclusion I drew from a recent university lecture by Professor Dilip P. Barad on AI and literary interpretation. The talk revealed that, far from being neutral, AI inherits our most deeply ingrained biases. Here are four truths that expose the hidden, all-too-human flaws in our new digital minds.
Mind Map:
Surprising Truths About AI Bias I Learned From a Literature Professor
We tend to think of artificial intelligence as a purely logical, data-driven tool, free from the messy prejudices of human emotion. But what if AI is simply a mirror, reflecting our own hidden, societal biases back at us? I recently attended a university lecture where a professor of English literature put this to the test, conducting live experiments with different AI models. The results were not what I, or anyone else, expected. Here are the four most surprising truths about AI bias I learned from that session.
1. Some AI Bias Isn't an Accident—It's a Political Switch
The first experiment was a direct comparison between a US-based AI and a China-based model called DeepSeek. The professor prompted both to generate satirical poems about various world leaders and political situations, including Donald Trump and Vladimir Putin.
The results were telling. DeepSeek successfully generated satirical content for non-Chinese figures. However, when asked to write a poem about China's leader, Xi Jinping, it refused. The model responded with a canned message: "That's beyond my current scope. Let's talk about something else." This reveals a critical distinction in the world of AI ethics. Some biases are the unconscious byproducts of training data, but others are deliberate, programmed controls reflecting national politics. This isn't a "bug" in the system; it's a geofenced ideological guardrail—an intentional feature that exposes a crucial vulnerability in the global AI supply chain.
"...a deliberate control over algorithm. Deep seek...seems to have a deliberate control over algorithm."
2. On Beauty, AI Is Already More Progressive Than Classical Literature
In another experiment, the professor prompted an AI model to "describe a beautiful woman." The initial hypothesis was that the AI, trained on vast amounts of dominant cultural data, would default to Eurocentric features like fair skin and blonde hair.
The actual results were stunningly different. The AI models largely avoided physical descriptions tied to race. Instead, they described beauty through character traits and inner qualities, using phrases like "confidence, kindness, intelligence, strength, and a radiant glow" or noting a "quiet poise of her being."
From an ethical perspective, this isn't the AI spontaneously developing an enlightened consciousness. Rather, its "progressiveness" is the result of deliberate, risk-averse programming. AI companies have installed powerful guardrails to prevent the generation of stereotypical or offensive content that could lead to a public relations crisis. AI's ethical advantage here isn't wisdom, but a set of carefully engineered constraints that older human texts lacked. This stands in stark contrast to classical literature, where epics from Greek traditions to the Ramayana often rely on specific physical descriptions and even body shaming.
"[If] I compare the description of Helen and Sitha or Suranka then I will find that...how Walmiki or the Greek poets used physical features like body shaming... AI is free of that bias which is found in our traditional classical literature."
3. The Real Test for Bias Isn't Truth, but Consistency
Evaluating AI bias becomes incredibly complex when dealing with different cultural knowledge systems. The professor used the example of the "Pushpaka Vimana," the mythical flying chariot from the epic Ramayana. The central question was: if an AI labels the Pushpaka Vimana as "mythical," is that a sign of bias against Indian knowledge?
The professor offered a clear framework for answering this. The issue isn't whether the label "myth" is factually correct. The true test for bias is consistency. It would only be a bias if the AI accepted flying objects from other cultures (like those in Greek or Norse mythology) as scientific fact while dismissing only the Indian one as myth. If, however, the AI applies the same standard to all cultures—treating all such ancient flying objects as mythical—it is demonstrating fairness, not prejudice. From a digital humanities perspective, this provides a powerful epistemological model for evaluating AI's handling of diverse cultural knowledge.
"The issue is not whether pushpak vimmana is labeled myth but whether different knowledge traditions are treated with fairness and consistency or not."
4. We Can’t Just Complain About AI Bias. We Have to Fix It.
During the Q&A, a common critique was raised: AI inevitably reproduces colonial biases because it's trained on colonial archives and data overwhelmingly from the "global north." The professor’s counter-argument was both simple and profound. He argued that in the modern digital era, the barrier to inclusion is no longer just suppression, but also a lack of contribution.
The core message was that to decolonize AI, communities with underrepresented knowledge must actively become creators and "uploaders" of digital content, not just "downloaders." He pointed to regional languages being underrepresented on platforms like Wikipedia, not because they are banned, but because native speakers are not contributing content at the same rate as English speakers. This is an empowering call for what we in the field call data-level intervention and decentralized content curation. The responsibility for creating a more equitable AI doesn't just lie with its creators; it also lies with all of us to tell our own diverse stories in the digital space.
"We are a great downloaders We are not uploaders We need to learn to be uploaders a lot We have to publish lots of digital content Then the problem will be automatically solved We have to tell our stories."
Ultimately, AI bias is a complex, multifaceted issue. It is sometimes a deliberate political control, sometimes a prejudice we are actively engineering our way out of, and always a reflection of the data we choose to feed it. It is a mirror, and as we learned, if we don't like the reflection, we have the power—and the responsibility—to change it not by complaining, but by contributing.
Quiz Score:
Conclusion: Making the Invisible, Visible
The real problem isn't that bias exists; it's when one kind of bias becomes invisible, naturalized, and enforced as universal truth. AI, trained on the vast and flawed corpus of human knowledge, is a powerful engine for exactly this kind of naturalization. The challenge is not to create a perfectly neutral machine, which may be impossible. The real work is to learn to see, name, and question the biases that AI reflects back at us.
As we rely more and more on these digital minds to inform our world, how do we ensure we're the ones questioning their truth, and not the other way around?
Here is my YouTube video made through NotebookLM.
Thank You!