CRITICAL STATE
Your weekly foreign policy fix.
If you read just one thing …
… read about AI gods, destroyers of context.
In religious texts, people seek to divine the will of beings not quite knowable by human perception. With AI chatbots, the feeling can be similar: the mode of communication resembles that of prayer, with questions placed into an open repository and the trust that an answer is knowable and coming. What happens, then, when people program a chatbot to respond with full religious authority, without a human intermediary to vouch for the automated answers? In India’s case, the chatbots may reflect the direct biases of their creators. As Nadia Nooreyezdan reports for Rest of World, “at least five GitaGPTs have sprung up between January and March this year [2023], with more on the way.” These bots are based on the Bhagavad Gita, a 700-verse Hindu scripture in which the Hindu god Krishna counsels the archer Arjuna, offering advice to help him resolve difficult moral questions. “Experts have warned that chatbots being allowed to play god might have unintended, and dangerous, consequences,” reports Nooreyezdan. “Rest of World found that some of the answers generated by the Gita bots lack filters for casteism, misogyny, and even law. Three of these bots, for instance, say it is acceptable to kill another if it is one’s dharma or duty.”
Puppeteering The Past
Artificial intelligence tools for generating images are now so commonplace that art made through computer prompts is inviting its own genre of criticism. While the bored email workers of the world plug in “Wes Anderson’s Teenage Mutant Ninja Turtles” and share a curated selection of the authorless results, a curious trait has been spotted in how AI renders human faces. When people in AI images smile, they smile like Americans, and specifically, like Americans faking happiness.
“In the same way that English language emotion concepts have colonized psychology, AI dominated by American-influenced image sources is producing a new visual monoculture of facial expressions. As we increasingly seek our own likenesses in AI reflections, what does it mean for the distinct cultural histories and meanings of facial expressions to become mischaracterized, homogenized, subsumed under the dominant dataset? In the AI-generated visual future, will we know that Native Americans didn’t smile for photos like WW2 US Navy Officers?” writes Jenka Gurfinkel at Medium.
Gurfinkel’s essay, which focuses heavily on a set of “Time Period Selfies,” highlights that photographic evidence shows not only that Native Americans didn’t pose or smile the way AI images suggest they would, but that even World War II Navy sailors adopted more muted expressions. Drawing such images with AI, therefore, isn’t a window into the past but a reflection of the present in historical costume.
Drone Ranger
The Yurok Tribe has lived along the Klamath River in Northern California since before there was a United States or a California. Like many other Indigenous groups within the United States, they are (to a degree) sovereign over their own territory, though still part of the greater whole of the nation. The tribe is currently facing a crisis of disappearances: the number of women suffering domestic violence or homicide has risen, and many have gone missing. To help find them, and other missing persons, Alanna Nulph of the Yurok Tribe learned to be a drone pilot.
“My dad and his friend had been flying drones for some time, and they gave me the idea that we can be more technically advanced when we search. This is an extreme rural area with some of the harshest landscapes for searching on foot, so any technology we can get is welcome,” Nulph told Native News Online. “I saw the struggle we had with law enforcement, with jurisdictional boundaries, and with getting resources in for searching on the reservation.”
As much as the drone itself, having a pilot within the Yurok Tribe means smoother coordination across the area’s overlapping and incomplete jurisdictions. It’s a small, four-rotored way to make the lands safer for all who traverse them.
Rebel Reliance: Part II
When the Syrian civil war came to Aleppo, it was hardly the first conflict to define the ancient city. Aleppo’s Citadel alone holds traces of fortifications built over four millennia, and, like the rest of the city, it has suffered damage in the recent years of fighting. Mapping the damage of war is both important and difficult, and it is often at best tertiary to the survival efforts of those living in a besieged city.
Unlike besiegers of Aleppo in centuries past, modern observers have access to satellite imagery of the city taken before and over the course of the war. This kind of imagery, once collected, labeled, and analyzed, can offer insight into the patterns of war in a city and, in turn, serve as a starting point for useful research into urban warfare, civilian relief, and the shape of conflict.
In “Monitoring war destruction from space using machine learning,” authors Hannes Mueller, Andre Groeger, Jonathan Hersh, Andrea Matranga, and Joan Serrat outline a method for augmenting human labeling of destruction in satellite imagery with machine learning. The result is an automated tool that can parse a city, compare points in time, and identify areas of heavy destruction.
At present, such labeling is done by hand, sometimes combined with reports from people on the ground, and it is limited by the speed of human observation and annotation. “An automated building-damage classifier for use with satellite imagery, which has a low rate of false positives in unbalanced samples and allows tracking on-the-ground destruction in close to real-time,” the authors write, “would therefore be extremely valuable for the international community and academic researchers alike.”
To build the model, the researchers trained a neural network to spot the features of destruction left by heavy weapons, such as artillery and aerial bombing, which are visible in the rubble of collapsed buildings and in craters. The network was also trained on undamaged areas of the city, so that it can distinguish intact buildings from ones hit by heavy attacks.
The model classifies imagery in patches of approximately 1,024 square meters (32 meters by 32 meters). This resolution allows damage to be mapped precisely: the destruction of a larger building covers multiple patches, while a single bomb in a neighborhood might register in only one. In a map of Aleppo produced with the model, destruction is plotted in red, and places free from damage are marked green.
The authors explained that “roads and parks are clearly visible as dark green (lowest destruction probability) or yellow patches. This is not only evidence of the power of our approach in picking up housing destruction, but it also shows how the classifier has learned that roads and parks are never destroyed buildings.”
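To make the approach concrete, here is a minimal sketch of the patch-classification idea, not the authors’ implementation: it assumes a PyTorch setup, stacks the pre- and post-war RGB bands of an image pair as six input channels, and uses an illustrative 32-pixel patch size; the small network and the names PatchDamageClassifier and destruction_map are hypothetical.

```python
# Illustrative sketch only: a tiny patch classifier in the spirit of the paper,
# not the authors' code. Patch size, channels, and architecture are assumptions.
import torch
import torch.nn as nn

PATCH = 32  # patch edge in pixels; the paper's patches cover roughly 32 m x 32 m on the ground


class PatchDamageClassifier(nn.Module):
    """Binary classifier: does this patch contain heavy-weapons destruction?"""

    def __init__(self, in_channels: int = 6):  # 3 RGB bands "before" + 3 "after"
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(in_channels, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),
            nn.Conv2d(32, 64, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),
        )
        # one logit per patch: high means "destroyed"
        self.head = nn.Sequential(nn.Flatten(), nn.Linear(64 * (PATCH // 4) ** 2, 1))

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.head(self.features(x))


def destruction_map(pre: torch.Tensor, post: torch.Tensor, model: nn.Module) -> torch.Tensor:
    """Tile a (3, H, W) before/after image pair into PATCH x PATCH cells and
    return a grid of destruction probabilities, one per cell."""
    pair = torch.cat([pre, post], dim=0)  # (6, H, W)
    c, h, w = pair.shape
    rows, cols = h // PATCH, w // PATCH
    patches = (
        pair[:, : rows * PATCH, : cols * PATCH]  # crop to a whole number of patches
        .unfold(1, PATCH, PATCH)                 # tile along height
        .unfold(2, PATCH, PATCH)                 # tile along width
        .permute(1, 2, 0, 3, 4)                  # (rows, cols, C, PATCH, PATCH)
        .reshape(-1, c, PATCH, PATCH)
    )
    with torch.no_grad():
        probs = torch.sigmoid(model(patches)).reshape(rows, cols)
    return probs  # plot high values in red, low values in green


if __name__ == "__main__":
    model = PatchDamageClassifier()
    pre = torch.rand(3, 256, 256)   # placeholder "before" image
    post = torch.rand(3, 256, 256)  # placeholder "after" image
    print(destruction_map(pre, post, model).shape)  # torch.Size([8, 8])
```

Training on human-labeled patches, the handling of heavily unbalanced samples that the authors emphasize, and comparisons across multiple time points are all omitted from this sketch.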
Ultimately, conclude the authors, “reliable and updated data on destruction from war zones play an important role for humanitarian relief efforts, but also for human-rights monitoring, reconstruction initiatives, and media reporting, as well as the study of violent conflict in academic research. Studying this form of violence quantitatively, beyond specific case studies, is currently impossible due to the absence of systematic data.”
Olatunji Olaigbe delved into the complex cyber war occurring in parallel to Sudan’s civil war. It’s a conflict taking place in both physical and digital spaces, and, writes Olaigbe, “ordinary citizens suffer the effects of cyberwarfare more than the conflicting parties. They are shut off from communicating with their communities and loved ones in moments when it’s needed most. They’re also the primary targets of mostly paranoid surveillance.” This surveillance predates the recent outbreak of hostilities, but it gained new attention when the opposition in Greece called out its government for exporting spyware technology to one of the armed factions in Sudan.
Matthew Bell tracked the ongoing protests in Tel Aviv as pro-democracy protesters in Israel maintain a steady effort to force concessions from the country’s ultra-reactionary government. While the government and its opposition engage in closed-door talks, the streets have become a contested place. Gaia Davidson, a 25-year-old graphic design student, told Bell about a dangerous standoff a month ago: “Suddenly, a group of about 50 very young people, all dressed in black and with black covers on their faces, runs toward us screaming. We run away and we hide behind buildings, and they run toward us and they start throwing bottles at us.”
Orla Barry reported on the plight of Romanian farmers. From the start of Russia’s February 2022 invasion, Romania has been a stalwart supporter of its neighbor Ukraine’s war effort against the invader. But that support has meant some hardship for local farmers, whose crops cannot compete with the flood of Ukrainian surplus diverted overland through Europe after safe transit routes out of the Black Sea were lost. “As Romanians, we will never give up on our commitment to helping the Ukrainian people in terms of casualties and resources for fighting the war. But the trade is the trade and the war is the war,” Romanian agricultural consultant Cezar Gheorghe told Barry.
Critical State is written by Kelsey D. Atherton with Inkstick Media.
The World is a weekday public radio show and podcast on global issues, news and insights from PRX and GBH.
With an online magazine and podcast featuring a diversity of expert voices, Inkstick Media is “foreign policy for the rest of us.”
Critical State is made possible in part by the Carnegie Corporation of New York.