
Monitor Daily Podcast

July 02, 2018
Mark Sappenfield
Senior global correspondent

Earlier this year, a reporter went to Denmark to discover why the United Nations judged it the happiest country on earth. The answer she found: trust. “If we agree on something, you would live up to that,” one Dane told her. Put simply, Danes have faith in one another.

Can that trust be expanded? An influx of migrants from Muslim countries has brought people who did not grow up riding bicycles everywhere or meticulously tending to immaculate gardens – in other words, being “Danish.” These newcomers also have other traditions, such as headscarves for women. And while crime has fallen broadly in Denmark, organized crime, drug-related offenses, and crime against public officials have increased, a US State Department report notes. The uptick has stoked fears.

So Denmark is rolling out new “ghetto laws.” One requires that children in high-immigrant areas be separated from their families for at least 25 hours a week for instruction in Danish values, as The New York Times reports. Another could double punishment for crimes in “ghettos.”

Denmark’s struggle is the West’s struggle. On one hand, that struggle can be cast as a battle to maintain treasured national traditions and values against onrushing demographic tides. But there’s a different view, too. The larger challenge of migration is the struggle to build a bigger “us” – to find a trust that extends beyond ethnic identities or spots on a map. Denmark has become a part of this test. To the degree that it can find a more universal basis for its trust, it can share its happiness with the world.  

Here are our five stories for the day, including a glimpse at the meaning of protests in Iran, a message for foreign students in the United States, and the latest installment in the Monitor’s solutions-journalism collaboration. 



Today’s stories

And why we wrote them

Criticize allies then make friendly overtures to a dictator? President Trump did it once before. If he does it again this month with NATO and Russia, it would offer the clearest signal yet of his opposition to a united Europe.

Iranian Labor News Agency/AP
Protesters chanting slogans swarmed Tehran's Grand Bazaar June 25, news agencies reported, and forced shopkeepers to close their stalls in apparent anger over Iran's troubled economy. Similar demonstrations rocked the country months ago.

We’ve all heard a lot about the divisions troubling Western democracies. Turns out, Iran is facing the same problem. Recent protests show how hard it has been for hard-liners and reformers to work together.

Daniel Becerril/Reuters
Migrant children from Honduras and Mexico play at the Senda de Vida migrant shelter in Reynosa, in Mexico’s Tamaulipas state, last month.

Children separated from their families at the border have been through a trying ordeal. But it doesn’t need to define who they grow up to be.

Foreign students’ interest in studying at US colleges is cooling. In response, colleges are going to new lengths to make sure students from overseas feel welcome. 

SOURCE: EAB/Royall & Company, “Effect of the Current Political Environment on International Student Enrollment,” 2017 | Jacob Turcotte/Staff

Global voices

Worldwide reports on progress

As part of our continuing solutions-journalism collaboration with newspapers worldwide, here's a story from Thailand about opening up new doors for young people – by helping them return to the farm.  


The Monitor's View

Reuters
The "pop.up next" concept by Audi, Airbus, and Italdsign, an electric driver-less autonomous vehicle with vertical take-off and landing, is pictured during the Viva Tech start-up and technology summit in Paris May 25.

This year marks exactly two centuries since the publication of “Frankenstein; or, The Modern Prometheus,” by Mary Shelley. Even before the invention of the electric light bulb, the author produced a remarkable work of speculative fiction that would foreshadow myriad ethical questions to be spawned by technologies yet to come.

Today the rapid growth of artificial intelligence (AI) raises fundamental questions: “What is intelligence, identity, or consciousness? What makes humans human?”

What is being called artificial general intelligence, machines that would mimic the way humans think, continues to elude scientists. Yet humans remain fascinated by the idea of robots that would look, move, and respond like humans, similar to those recently depicted on popular sci-fi TV series such as “Westworld” and “Humans.”

Just how people think is still far too complex to be understood, let alone reproduced, says David Eagleman, a Stanford University neuroscientist and science adviser for “Westworld.” “[W]e are just in a situation where there are no good theories explaining what consciousness actually is and how you could ever build a machine to get there.”

But that doesn’t mean crucial ethical issues involving AI aren’t at hand. Less sophisticated AI is already embedded in everyday life, from the (sometimes) helpful voice assistants like Alexa to Facebook tagging photos for users.

Besides much-talked-about vehicles that will drive themselves, AI is crunching huge amounts of data to suggest whether a prisoner would likely return to crime if released; algorithms exist that can choose the best applicants for a job or the right classes for a student to take (not to mention defeat a human at chess or win a debate).

All these systems contain the possibility of misuse. One viral video shows an automatic soap dispenser in a public bathroom that only dispenses soap onto white hands. Apparently the design team forgot to calibrate the sensor so that it recognized hands with darker skin tones.
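
A purely illustrative sketch of how such a miscalibration can play out in code. The function name, threshold, and reflectance readings below are all invented for illustration; they do not describe any real product:

    # Hypothetical sketch: a detection threshold tuned only on lighter,
    # higher-reflectance hands will miss darker, lower-reflectance ones.
    def hand_detected(reflectance: float) -> bool:
        CALIBRATED_THRESHOLD = 0.6  # chosen during testing with lighter-skinned users only
        return reflectance > CALIBRATED_THRESHOLD

    print(hand_detected(0.8))  # True: soap dispensed
    print(hand_detected(0.4))  # False: a hand is present, but no soap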

While that foul-up might seem frivolous, or even humorous (though perhaps not to those being denied soap), it illustrates a more serious problem: If an employer screens new hires using an algorithm trained on the characteristics of its presently all-white or all-male staff, might the algorithm recommend only people with those characteristics?
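
One way to see how that could happen is sketched below, with entirely made-up data, feature names, and hiring outcomes (scikit-learn is used only for illustration): a model fitted to past decisions made by a homogeneous staff can end up weighting group membership rather than qualifications.

    # Hypothetical sketch: past hiring data in which only majority-group
    # candidates were ever hired, regardless of qualifications.
    from sklearn.linear_model import LogisticRegression

    # Each candidate: [years_experience, test_score, in_majority_group]
    past_candidates = [
        [5, 80, 1], [3, 75, 1], [7, 90, 1], [2, 60, 1],  # majority group
        [6, 85, 0], [4, 78, 0], [8, 92, 0], [1, 55, 0],  # everyone else
    ]
    was_hired = [1, 1, 1, 0, 0, 0, 0, 0]  # only majority-group candidates were hired

    model = LogisticRegression().fit(past_candidates, was_hired)

    # Two equally qualified applicants who differ only in group membership.
    print(model.predict_proba([[6, 85, 1], [6, 85, 0]])[:, 1])
    # The model tends to score the majority-group applicant higher, because
    # group membership is what separated "hired" from "not hired" in the data.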

The coming use of autonomous vehicles poses gnarly ethical questions. Human drivers sometimes must make split-second decisions. Their reactions may be a complex combination of instant reflexes, input from past driving experiences, and what their eyes and ears tell them in that moment. 

AI "vision" today is not nearly as sophisticated as that of humans. And to anticipate every imaginable driving situation is a difficult programming problem. One possible technique may be to survey human drivers to ask what they would do in myriad driving situations. Another would be to analyze accidents involving AI after the fact, to understand how it proved deficient and fix the problem.

The hope is that AI-driven vehicles will become far better drivers than humans, preventing thousands of injuries and deaths.

But whenever decisions are based on masses of data, “you quickly get into a lot of ethical questions,” notes Tan Kiat How, chief executive of the Info-communications Media Development Authority, a Singapore-based agency that is helping the government develop a voluntary code for the ethical use of AI.

Along with Singapore, other governments and mega-corporations are beginning to establish their own guidelines. Britain is setting up a data ethics center. India released its AI ethics strategy this spring. Worldwide, high schools and colleges could seriously commit to teaching students in AI courses about the ethical issues this new technology raises.

On June 7 Google pledged to not “design or deploy AI” that would cause “overall harm,” or to develop AI-directed weapons or use AI for surveillance that would violate international norms. It also pledged to not deploy AI whose use would violate international laws or human rights.

While the statement is vague, it represents one starting point. So does the idea that decisions made by AI systems should be “explainable, transparent, and fair,” as S. Iswaran, Singapore’s minister for communications and information, put it recently.

To put it another way: How can we make sure that the thinking of intelligent machines reflects humanity’s highest values? Only then will they be useful servants and not Frankenstein’s unleashed monster.


A Christian Science Perspective

About this feature

Each weekday, the Monitor includes one clearly labeled religious article offering spiritual insight on contemporary issues, including the news. The publication – in its various forms – is produced for anyone who cares about the progress of the human endeavor around the world and seeks news reported with compassion, intelligence, and an essentially constructive lens. For many, that caring has religious roots. For many, it does not. The Monitor has always embraced both audiences. The Monitor is owned by a church – The First Church of Christ, Scientist, in Boston – whose founder was concerned with both the state of the world and the quality of available news.

Inspired by the story of a slave claiming his freedom, today’s contributor took a spiritual stand for her own liberation from chronic pain.


A message of love

Noah Berger/AP
Smoke from a wildfire rose above sunflowers in Citrona, Calif., July 1. Evacuations were ordered as hot, dry winds fueled the wildfire burning out of control in rural northern California, sending a stream of smoke some 75 miles south into the San Francisco Bay Area. Cal Fire says the fire, which started June 30 near the town of Guinda, had reached 32,500 acres by the next evening.
(The illustrations in today’s Monitor Daily are by Jacob Turcotte.)

A look ahead

Thank you for joining us today. Tomorrow, correspondent Doug Struck takes a look at what it means to be patriotic in America today. 
