The United States “won’t tolerate” China’s effective ban on purchases of Micron Technology MU.O memory chips and is working closely with allies to address such “economic coercion,” U.S. Commerce Secretary Gina Raimondo said Saturday.
Raimondo told a news conference after a meeting of trade ministers in the U.S.-led Indo-Pacific Economic Framework talks that the U.S. “firmly opposes” China’s actions against Micron.
These “target a single U.S. company without any basis in fact, and we see it as plain and simple economic coercion and we won’t tolerate it, nor do we think it will be successful.”
China’s cyberspace regulator said May 21 that Micron, the biggest U.S. memory chip maker, had failed its network security review and that it would block operators of key infrastructure from buying from the company, prompting Micron to predict a hit to its revenue.
The move came a day after leaders of the G7 industrial democracies agreed to new initiatives to push back against economic coercion by China — a decision noted by Raimondo.
“As we said at the G7 and as we have said consistently, we are closely engaging with partners addressing this specific challenge and all challenges related to China’s non-market practices.”
Raimondo also raised the Micron issue in a meeting Thursday with China’s Commerce Minister, Wang Wentao.
She also said the IPEF agreement on supply chains and other pillars of the talks would be consistent with U.S. investments in the $52 billion CHIPS Act to foster semiconductor production in the United States.
“The investments in the CHIPS Act are to strengthen and bolster our domestic production of semiconductors. Having said that, we welcome participation from companies that are in IPEF countries, you know, so we expect that companies from Japan, Korea, Singapore, etc., will participate in the CHIPS Act funding,” Raimondo said.
China and South Korea have agreed to strengthen dialogue and cooperation on semiconductor industry supply chains, amid broader global concerns over chip supplies, sanctions and national security, China’s commerce minister said.
Wang Wentao met with South Korean Trade Minister Ahn Duk-geun on the sidelines of the Asia-Pacific Economic Cooperation (APEC) conference in Detroit, which ended Friday.
They exchanged views on maintaining the stability of the industrial supply chain and strengthening cooperation in bilateral, regional and multilateral fields, according to a statement from the Chinese Ministry of Commerce on Saturday.
Wang also said that China is willing to work with South Korea to deepen trade ties and investment cooperation.
However, a South Korean statement on the same meeting did not mention chips, instead saying the country’s trade minister had asked China to stabilize the supply of key raw materials — and asked for a predictable business environment for South Korean companies in China.
“The South Korean side expressed that communication is needed between working-level officials over all industries,” not just for semiconductors, a source with knowledge of the matter told Reuters.
The source declined to be identified because they were not authorized to speak to the media.
South Korea is in the crosshairs of a tit-for-tat row between the United States and China over semiconductors.
China’s cyberspace regulator said last week that Micron had failed its network security review and that it would block operators of key infrastructure from buying from the company.
The U.S. has pushed for countries to limit China’s access to advanced chips, citing a host of reasons including national security.
About 40% of South Korea’s chip exports go to China, according to trade ministry data, while U.S. technology and equipment are necessary for South Korean chipmakers Samsung Electronics and SK Hynix.
As concerns grow over increasingly powerful artificial intelligence systems like ChatGPT, the nation’s financial watchdog says it’s working to ensure that companies follow the law when they’re using AI.
Already, automated systems and algorithms help determine credit ratings, loan terms, bank account fees, and other aspects of our financial lives. AI also affects hiring, housing and working conditions.
Ben Winters, senior counsel for the Electronic Privacy Information Center, said a joint statement on enforcement released by federal agencies last month was a positive first step.
“There’s this narrative that AI is entirely unregulated, which is not really true,” he said. “They’re saying, ‘Just because you use AI to make a decision, that doesn’t mean you’re exempt from responsibility regarding the impacts of that decision. This is our opinion on this. We’re watching.’”
In the past year, the Consumer Financial Protection Bureau said it has fined banks over mismanaged automated systems that resulted in wrongful home foreclosures, car repossessions and lost benefit payments, after the institutions relied on new technology and faulty algorithms.
There will be no “AI exemptions” to consumer protection, regulators say, pointing to these enforcement actions as examples.
Consumer Financial Protection Bureau Director Rohit Chopra said the agency has “already started some work to continue to muscle up internally when it comes to bringing on board data scientists, technologists and others to make sure we can confront these challenges” and that the agency is continuing to identify potentially illegal activity.
Representatives from the Federal Trade Commission, the Equal Employment Opportunity Commission, and the Department of Justice, as well as the CFPB, all say they’re directing resources and staff to take aim at new tech and identify negative ways it could affect consumers’ lives.
“One of the things we’re trying to make crystal clear is that if companies don’t even understand how their AI is making decisions, they can’t really use it,” Chopra said. “In other cases, we’re looking at how our fair lending laws are being adhered to when it comes to the use of all of this data.”
Under the Fair Credit Reporting Act and Equal Credit Opportunity Act, for example, financial providers have a legal obligation to explain any adverse credit decision. Those regulations likewise apply to decisions made about housing and employment. Where AI makes decisions in ways that are too opaque to explain, regulators say the algorithms shouldn’t be used.
“I think there was a sense that, ‘Oh, let’s just give it to the robots and there will be no more discrimination,’” Chopra said. “I think the learning is that that actually isn’t true at all. In some ways the bias is built into the data.”
EEOC Chair Charlotte Burrows said there will be enforcement against AI hiring technology that screens out job applicants with disabilities, for example, as well as so-called “bossware” that illegally surveils workers.
Burrows also described ways that algorithms might dictate how and when employees can work in ways that would violate existing law.
“If you need a break because you have a disability or perhaps you’re pregnant, you need a break,” she said. “The algorithm doesn’t necessarily take into account that accommodation. Those are things that we are looking closely at. … I want to be clear that while we recognize that the technology is evolving, the underlying message here is the laws still apply and we do have tools to enforce.”
OpenAI’s top lawyer, at a conference this month, suggested an industry-led approach to regulation.
“I think it first starts with trying to get to some kind of standards,” Jason Kwon, OpenAI’s general counsel, told a tech summit in Washington hosted by software industry group BSA. “Those could start with industry standards and some sort of coalescing around that. And decisions about whether or not to make those compulsory, and also then what’s the process for updating them, those things are probably fertile ground for more conversation.”
Sam Altman, the head of OpenAI, which makes ChatGPT, said government intervention “will be critical to mitigate the risks of increasingly powerful” AI systems, suggesting the formation of a U.S. or global agency to license and regulate the technology.
While there’s no immediate sign that Congress will craft sweeping new AI rules as European lawmakers are doing, societal concerns brought Altman and other tech CEOs to the White House this month to answer hard questions about the implications of these tools.
Global investment in clean energy production in 2023 will be significantly larger than investment in fossil fuel-based energy generation, and for the first time, more money will be invested in solar energy than in the oil sector, according to a report issued by the International Energy Agency on Thursday.
The report, World Energy Investment 2023, finds that globally, $2.8 trillion will be invested in energy in 2023, including production, transmission and storage. Of that amount, $1.7 trillion will be invested in clean technology, which the IEA defines as “renewables, electric vehicles, nuclear power, grids, storage, low-emissions fuels, efficiency improvements and heat pumps.”
The estimate for clean energy for 2023 reflects a 24% increase over that for 2021 in a sector expected to continue growing for the foreseeable future, as governments worldwide attempt to meet the internationally agreed-on target of net-zero carbon emissions by 2050. Achieving that goal would allow the world to avoid some of the worst effects of global warming.
While the report shows that the road to a zero-carbon future is long, it also offers the possibility that key interim goals, including total investment targets for 2030, remain achievable.
“Clean energy is moving fast — faster than many people realize,” IEA Executive Director Fatih Birol said in a statement accompanying the report. “This is clear in the investment trends, where clean technologies are pulling away from fossil fuels. For every dollar invested in fossil fuels, about 1.7 dollars are now going into clean energy. Five years ago, this ratio was 1-to-1. One shining example is investment in solar, which is set to overtake the amount of investment going into oil production for the first time.”
The report estimates that in 2023, total global investment in solar power technology will be $382 billion, compared with $371 billion invested in oil production. In 2013, the amount invested in oil production was $636 billion, five times larger than the $127 billion invested in solar.
No pandemic slowdown
Nat Bullard, an energy analyst and a senior contributor to BloombergNEF, which provides strategic research on the transition to a low-carbon economy, told VOA that the IEA report was clarifying after a period of complexity in the energy markets.
“We have had, in succession and overlapping, a pandemic, a supply chain crunch, inflation and a very, very large war all going on at once,” he said. “They’ve made long-term trends hard to see because you’ve had a lot of near-term variability.
“What the report highlights, and the IEA has generally been very clear, is that if you look on an evidence basis, during COVID we did not actually see any deceleration in interest in energy transition,” he said. “In the years after that, supply chain disruptions, high prices for hydrocarbons and big conflicts have actually encouraged investment.”
Not evenly distributed
China is far and away the largest single investor in clean energy, plunging $184 billion into the sector in 2022. Taken as a whole, the European Union invested $154 billion in clean energy in 2022.
The U.S. trailed both, with $97 billion invested last year. However, the amount spent by the U.S. in 2023 will likely be significantly larger thanks to passage of legislation last year containing funding for clean energy generation.
Rounding out the top five, Japan invested $28 billion in clean energy; India, $19 billion.
While rising investment in renewable power is good news in the climate-change fight, the IEA points out that it is heavily tilted toward large developed economies, with poorer countries and the Global South, in particular, seeing relatively little investment.
The entire continent of Africa, for example, saw just $10 billion in clean energy investment in 2022.
Electric vehicles and batteries
Two of the fastest-growing segments of the clean energy investment space are electric vehicles (EVs) and batteries that store power generated by clean energy technologies.
In 2023, the IEA estimates that $129 billion will be invested in electric vehicle technology, more than nine times the $14 billion invested just five years earlier. Battery storage will be the target of $37 billion in investment this year, over seven times the $5 billion invested in the sector in 2018.
In both segments, China is leading the way. In 2022, the entire world’s production capacity for lithium-ion batteries, the type most commonly used in EVs, stood at 1.57 terawatt hours. China accounted for 76% of that capacity. By 2030, according to the IEA, that capacity will have ballooned to 6.79 TWh, but China’s dominance will continue, accounting for 68% of the total.
Fossil fuels still growing
While renewables may be attracting more investment dollars than fossil fuels in 2023, the IEA reported that consumption of fossil fuels will continue to rise this year.
Meeting the net-zero goal in 2050 requires a slowing of investment in fossil fuel technology, according to the IEA. The report projects that more than $1 trillion will be invested in fossil fuels in 2023. To meet the agency’s benchmark for progress, that figure would have to be cut by more than half by 2030.
Conversely, to remain on track, investment in clean energy must continue to grow. The agency estimates that to meet the benchmark for 2030, annual investment will have to grow from $1.7 trillion this year to $4.6 trillion in 2030.
To reach that goal, clean energy spending would have to grow by about 15% every year between now and 2030, somewhat higher than the 11.4% annual growth the sector has experienced over the past three years.
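The growth rates above follow from simple compound-growth arithmetic. A minimal sketch using the report's dollar figures (the formula itself is standard, not from the IEA):

```python
# Compound-growth check of the IEA benchmark figures quoted above.
start, target = 1.7, 4.6   # trillions of dollars: 2023 estimate vs. 2030 benchmark
years = 2030 - 2023        # seven years of growth

# Annual rate r such that start * (1 + r)**years == target
required_rate = (target / start) ** (1 / years) - 1
print(f"Required annual growth: {required_rate:.1%}")   # about 15%

# Projection if the recent ~11.4% annual pace simply continued
at_recent_pace = start * 1.114 ** years
print(f"2030 total at 11.4%/yr: ${at_recent_pace:.2f} trillion")  # falls short of $4.6T
```

Continuing at the recent pace alone would leave roughly a trillion-dollar annual shortfall against the 2030 benchmark, which is why the report frames the extra few points of growth as essential.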
State-sponsored Chinese hackers have infiltrated critical U.S. infrastructure networks, the United States, its Western allies and Microsoft said Wednesday while warning that similar espionage attacks could be occurring globally.
Microsoft highlighted Guam, a U.S. territory in the Pacific Ocean with a vital military outpost, as one of the targets, but said “malicious” activity had also been detected elsewhere in the United States.
The stealthy attack, carried out since mid-2021 by a China-sponsored actor dubbed “Volt Typhoon,” enabled long-term espionage and was likely aimed at hampering the United States if there was conflict in the region, it said.
“Microsoft assesses with moderate confidence that this Volt Typhoon campaign is pursuing development of capabilities that could disrupt critical communications infrastructure between the United States and Asia region during future crises,” the statement said.
“In this campaign, the affected organizations span the communications, manufacturing, utility, transportation, construction, maritime, government, information technology, and education sectors.”
Microsoft’s statement coincided with an advisory released by U.S., Australian, Canadian, New Zealand and British authorities warning that the hacking was likely occurring globally.
“This activity affects networks across US critical infrastructure sectors, and the authoring agencies believe the actor could apply the same techniques against these and other sectors worldwide,” they said.
‘Living off the land’
The United States and its allies said the activities involved “living off the land” tactics, which take advantage of built-in network tools to blend in with normal Windows systems.
The advisory warned that the hackers could then use legitimate system administration commands that appear “benign.”
Microsoft said the Volt Typhoon attack tried to blend into normal network activity by routing traffic through compromised small office and home office network equipment, including routers, firewalls and VPN hardware.
“They have also been observed using custom versions of open-source tools,” Microsoft said.
Microsoft and the security agencies released guidelines for organizations to try to detect and counter the hacking.
“It’s what I would term a low and slow cyber activity,” said Alastair MacGibbon, chief strategy officer at Australia’s CyberCX and a former head of the Australian Cyber Security Centre.
“This is someone wearing a camouflage vest and carrying a sniper rifle. You don’t see them, they’re not there,” he told AFP.
“When you think about something that can really cause catastrophic harm, it is someone with intent who takes time to get into systems.”
Once inside, the cyber attackers can steal information, he said. “But it also gives you the ability to carry out destructive acts at a later stage.”
A number of other governments had found similar activity since the Volt Typhoon alert was issued, said Robert Potter, co-founder of Australian cybersecurity firm Internet 2.0.
“I am not sure how communications infrastructure would be at risk from these attacks because those networks are highly resilient and difficult to bring down for more than small intervals,” Potter told AFP.
“However, the ongoing threat from China-based APT (advanced persistent threat) groups is real.”
The director of the U.S. Cybersecurity and Infrastructure Security Agency, Jen Easterly, said China had been stealing intellectual property and data worldwide for years.
“Today’s advisory, put out in conjunction with our U.S. and international partners, reflects how China is using highly sophisticated means to target our nation’s critical infrastructure,” Easterly said.
China offered no immediate response to the allegations. But it routinely denies carrying out state-sponsored cyberattacks.
China in turn regularly accuses the United States of cyber espionage.
Beijing’s restrictions on American chipmaker Micron in retaliation for sweeping US chip curbs mark a major step up in its response to Washington’s pressure and could open the door for further measures in the geopolitical standoff, analysts say.
But they warned President Xi Jinping’s ability to raise the stakes will be limited as he battles to re-energize the world’s number two economy while it struggles to recover from years of zero-Covid-imposed inertia.
China on Sunday banned the use of Micron’s chips in critical infrastructure projects, which Beijing said posed “major network security risks” that could affect “national security”.
Washington expressed “serious concerns” over the ruling that came just as leaders of the world’s seven richest nations (G7) signed a statement urging Beijing to end “economic coercion”.
The move marked a significant shift in China’s response to US measures that have targeted the country’s technology sector, with Gary Ng, a senior economist at Natixis who specializes in the global chip trade, calling it “a landmark case”.
He emphasized it was China’s first cybersecurity probe into a foreign company since tighter rules were announced in 2021, and a rare instance when the scope of such reviews was expanded to include national security concerns.
“I wouldn’t be surprised if regulators used these reviews as a tool for retaliation in future” when faced with other geopolitical issues, he said.
Emily Weinstein, a research fellow at Georgetown University specializing in the US-China tech rivalry, added that the definition of what fell under “critical information infrastructure” was very broad — ranging from online government services and defense to healthcare and water conservation.
“Technically that could mean that anything qualifies,” she said.
“China has consistently found national security or other reasons to create protectionist barriers,” including mandatory technology transfer agreements, requirements for companies to store all data locally, and requirements for foreign entities to form joint ventures with local partners in several sectors.
‘Fuel to this fire’
China began an investigation into Micron in late March, five months after the US unveiled sweeping curbs aimed at cutting off Beijing’s access to high-end chips, chipmaking equipment and software used to design semiconductors.
“This is clearly part of a tit-for-tat retaliation for what Beijing perceives as Washington’s support of Micron and the US semiconductor industry,” said Paul Triolo, a China tech expert at consultancy Albright Stonebridge.
Micron was singled out to make a political statement, Triolo said, adding that previous cybersecurity reviews of domestic firms, such as ride-hailing app Didi, focused on data instead of broadening the scope to include national security.
Washington has blacklisted Chinese chipmakers including Micron rival Yangtze Memory Technologies.
The announcement came as the G7 nations said they would move to “de-risk, not decouple” from China, while Washington pressures allies to unite in restricting chip equipment exports to China.
“The strong statement from G7 may have added fuel to this fire,” Ng said.
However, Xi’s desire to combat what he sees as US hegemony will need to be balanced against the impact such measures would have on the economy.
According to analysts, Micron — one of the US’s largest memory chipmakers — was an easy target because its semiconductors could be replaced by products from South Korea’s SK Hynix and Samsung.
But restrictions against other US firms such as Intel and Qualcomm would be much harder to deal with because their technologies are used in consumer goods, including smartphones, that are made in the country and shipped abroad.
Betting on South Korea
“The approach of limiting US firms like Micron intends to send a signal that Beijing is willing to bear some pain as it contests with the US,” Ja Ian Chong, an associate professor of political science at the National University of Singapore, said.
“But Beijing is quite careful to limit costs to itself,” he said, according to Bloomberg News.
The ban will come down particularly hard on companies offering cloud services or data centers because they use hardware that requires high-end memory chips, according to Toby Zhu, an analyst at market research firm Canalys.
He told AFP that Micron’s consumer goods products are “completely replaceable” by South Korean and domestic memory chip suppliers.
And Triolo said Beijing was “betting on switching to South Korean suppliers”.
However, the White House last month urged South Korean chipmakers not to export to China to fill any gap left by a ban on US semiconductor imports.
The Netherlands and Japan have already announced their own restrictions on chip exports, following requests from Washington.
Ng added: “China has been quite cautious not to retaliate too much… because Beijing can’t ramp up domestic capacity quickly to match any shortfall.”
Microsoft Corp. said on Wednesday it had uncovered malicious activity by a state-sponsored actor based in China aimed at critical infrastructure organizations in Guam and the United States.
Microsoft said it assessed with “moderate confidence” that this Volt Typhoon campaign “is pursuing development of capabilities that could disrupt critical communications infrastructure between the United States and Asia region during future crises.”
Volt Typhoon has been active since mid-2021 and has targeted critical infrastructure organizations in Guam and elsewhere in the United States, the company said.
Guam is home to major U.S. military facilities, including Andersen Air Force Base, which would be key to responding to any conflict in the Asia-Pacific region.
Microsoft said it had notified targeted or compromised customers and provided them with information.
The Chinese embassy in Washington did not immediately respond to a Reuters request for comment.
Apple Inc on Tuesday said it has entered a multi-billion-dollar deal with chipmaker Broadcom Inc. to use chips made in the United States.
Under the multi-year deal, Broadcom will develop 5G radio frequency components with Apple that will be designed and built in several U.S. facilities, including Fort Collins, Colorado, where Broadcom has a major factory, Apple said.
Broadcom shares were up 2.2% after the announcement, hitting a record high. The chipmaker is already a major supplier of wireless components to Apple, with about one-fifth of its revenue coming from the iPhone maker in its two most recent fiscal years.
Apple has been steadily diversifying its supply chains, building more products in India and Vietnam and saying that it will source chips from a new Taiwan Semiconductor Manufacturing Co plant under construction in Arizona.
The two companies did not disclose the size of the deal, with Broadcom saying only that the new agreements require it to allocate Apple “sufficient manufacturing capacity and other resources to make these products.”
Broadcom and Apple previously had a three-year, $15 billion agreement that Bernstein analyst Stacy Rasgon said was set to expire in June. He said the development was positive for Broadcom, despite the fact that the two firms did not give a time frame for how long the work will last.
“It’s good that it removes that overhang,” Rasgon said. “Broadcom has existed over the years with a number of these long-term agreements with Apple. Sometimes they have them and sometimes they don’t.”
Apple said it will tap Broadcom for what are known as film bulk acoustic resonator (FBAR) chips. The FBAR chips are part of a radio-frequency system that helps iPhones and other Apple devices connect to mobile data networks.
“All of Apple’s products depend on technology engineered and built here in the United States, and we’ll continue to deepen our investments in the U.S. economy because we have an unshakable belief in America’s future,” Apple CEO Tim Cook said in a statement.
Apple said it currently supports more than 1,100 jobs in Broadcom’s Fort Collins FBAR filter manufacturing facility.
TikTok on Monday filed suit in U.S. federal court to stop the northern state of Montana from implementing an overall ban on the video-sharing app.
The unprecedented ban, set to start in 2024, violates the constitutionally protected right to free speech, TikTok argued in the suit.
“We believe our legal challenge will prevail based on an exceedingly strong set of precedents and facts,” a TikTok spokesperson told AFP.
Montana Governor Greg Gianforte signed the prohibition into law on May 17.
Gianforte said on Twitter that he endorsed the ban in order to “protect Montanans’ personal and private data from the Chinese Communist Party.”
“The state has enacted these extraordinary and unprecedented measures based on nothing more than unfounded speculation,” TikTok contended in its lawsuit.
Five TikTok users last week filed a suit of their own, calling on a federal court to overturn Montana’s ban on the app, arguing that it violates their free speech rights.
Both suits filed against Montana argue the state is trying to exercise national security power that only the federal government can wield and is violating free speech rights in the process.
TikTok called on the federal court to declare the Montana ban on its app unconstitutional and block the state from ever putting it into effect.
“Montana can no more ban its residents from viewing or posting to TikTok than it could ban the Wall Street Journal because of who owns it or the ideas it publishes,” the lawsuit filed by TikTok users contends.
The app is owned by Chinese firm ByteDance and is accused by a wide swath of U.S. politicians of being under the control of the Chinese government and a tool of espionage for Beijing, something the company furiously denies.
Montana became the first U.S. state to ban TikTok, with the law set to take effect next year as debate escalates over the impact and security of the popular video app.
A matter of law
The prohibition will serve as a legal test for a national ban of the platform, something that lawmakers in Washington are increasingly calling for.
The Montana ban makes it a violation each time “a user accesses TikTok, is offered the ability to access TikTok, or is offered the ability to download TikTok.”
Each violation is punishable by a $10,000 fine every day it takes place.
Under the law, Apple and Google will have to remove TikTok from their app stores and companies will face possible daily fines.
The prohibition will take effect in 2024 but would be voided if TikTok is acquired by a company incorporated in a country not designated by the United States as a foreign adversary, the law reads.
The cases should move quickly in court, since they center on points of law that don’t require lots of evidence to be gathered, according to University of Richmond law professor Carl Tobias.
“There are very compelling constitutional arguments that favor the plaintiffs,” Tobias said.
“First is free speech, and second is if the ban is justified by national security, that is a matter for the federal government not any individual state.”
The law is the latest skirmish in duels between TikTok and many western governments, with the app already banned on government devices in the United States, Canada and several countries in Europe.
After an anonymous TikTok user created a song using artificial intelligence that fooled many into thinking it was made by pop stars, experts say the music industry will have to decide how to handle AI music.
Saudi Arabia’s first astronauts in decades rocketed toward the International Space Station on a chartered multimillion-dollar flight Sunday.
SpaceX launched the ticket-holding crew from Kennedy Space Center, led by a retired NASA astronaut now working for the company that arranged the trip. Also on board: a U.S. businessman who now owns a sports car racing team.
The four should reach the space station in their capsule Monday morning; they’ll spend just more than a week there before returning home with a splashdown off the Florida coast.
Sponsored by the Saudi Arabian government, Rayyanah Barnawi, a stem cell researcher, became the first woman from the kingdom to go to space. She was joined by Ali al-Qarni, a fighter pilot with the Royal Saudi Air Force.
They’re the first from their country to ride a rocket since a Saudi prince launched aboard shuttle Discovery in 1985. In a quirk of timing, they’ll be greeted at the station by an astronaut from the United Arab Emirates.
“Hello from outer space! It feels amazing to be viewing Earth from this capsule,” Barnawi said after settling into orbit.
Added al-Qarni: “As I look outside into space, I can’t help but think this is just the beginning of a great journey for all of us.”
Rounding out the visiting crew: Knoxville, Tennessee’s John Shoffner, former driver and owner of a sports car racing team that competes in Europe, and chaperone Peggy Whitson, the station’s first female commander who holds the U.S. record for most accumulated time in space: 665 days and counting.
“It was a phenomenal ride,” Whitson said after reaching orbit. Her crewmates clapped their hands in joy.
It’s the second private flight to the space station organized by Houston-based Axiom Space. The first was last year by three businessmen, with another retired NASA astronaut. The company plans to start adding its own rooms to the station in another few years, eventually detaching them to form a stand-alone outpost available for hire.
Axiom won’t say how much Shoffner and Saudi Arabia are paying for the planned 10-day mission. The company had previously cited a ticket price of $55 million each.
NASA’s latest price list shows per-person, per-day charges of $2,000 for food and up to $1,500 for sleeping bags and other gear. Need to get your stuff to the space station in advance? Figure roughly $10,000 per pound ($20,000 per kilogram), the same fee for trashing it afterward. Need your items back intact? Double the price.
At least the email and video links are free.
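Those per-item rates add up quickly over a mission. A back-of-the-envelope sketch using the quoted rates (the 50-pound cargo figure is an assumption for illustration, not from NASA's price list):

```python
# Rough per-person tab for a 10-day private station visit, using the quoted rates.
DAYS = 10                 # planned mission length
FOOD_PER_DAY = 2_000      # dollars per person per day
GEAR_PER_DAY = 1_500      # sleeping bags and other gear (upper bound)
CARGO_PER_LB = 10_000     # send cargo up in advance; trashing it costs the same
cargo_lb = 50             # assumed cargo weight for illustration

food = DAYS * FOOD_PER_DAY
gear = DAYS * GEAR_PER_DAY
cargo_up = cargo_lb * CARGO_PER_LB
cargo_back_intact = cargo_lb * 2 * CARGO_PER_LB  # "double the price" for intact return

print(f"Food ${food:,} + gear ${gear:,} + cargo up ${cargo_up:,}"
      f" = ${food + gear + cargo_up:,}")
```

Even before the ticket itself, supplies alone run well into six figures for a modest cargo load — still a rounding error next to the $55 million fares previously cited.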
The guests will have access to most of the station as they conduct experiments, photograph Earth and chat with schoolchildren back home, demonstrating how kites fly in space when attached to a fan.
After decades of shunning space tourism, NASA now embraces it, with two private missions planned each year. The Russian Space Agency has been doing it, off and on, for decades.
“Our job is to expand what we do in low-Earth orbit across the globe,” said NASA’s space station program manager Joel Montalbano.
SpaceX’s first-stage booster landed back at Cape Canaveral eight minutes after liftoff — a special treat for the launch day crowd, which included about 60 Saudis.
“It was a very, very exciting day,” said Axiom’s Matt Ondler.
Weather-related disasters have surged over the past 50 years, causing swelling economic damage even as early warning systems have meant dramatically fewer deaths, the United Nations said Monday.
Extreme weather, climate and water-related events caused 11,778 reported disasters between 1970 and 2021, new figures from the U.N.’s World Meteorological Organization (WMO) show.
Those disasters killed just more than 2 million people and caused $4.3 trillion in economic losses.
“The most vulnerable communities unfortunately bear the brunt of weather, climate and water-related hazards,” WMO chief Petteri Taalas said in a statement.
The report found that more than 90% of reported deaths worldwide due to disasters in the 51-year period occurred in developing countries.
But the agency also said improved early warning systems and coordinated disaster management had significantly reduced the human casualty toll.
In a report issued two years ago covering disaster-linked deaths and losses between 1970 and 2019, WMO pointed out that at the beginning of that period the world was seeing more than 50,000 such deaths each year.
By the 2010s, the disaster death toll had dropped below 20,000 annually.
And in its update of that report, WMO said Monday that 22,608 disaster deaths were recorded globally in 2020 and 2021 combined.
‘Early warnings save lives’
Cyclone Mocha, which wreaked havoc in Myanmar and Bangladesh last week, exemplifies this, Taalas said.
Mocha “caused widespread devastation … impacting the poorest of the poor,” he said.
But while Myanmar’s junta has put the death toll from the cyclone at 145, Taalas pointed out that during similar disasters in the past, “both Myanmar and Bangladesh suffered death tolls of tens and even hundreds of thousands of people.”
“Thanks to early warnings and disaster management, these catastrophic mortality rates are now thankfully history. Early warnings save lives,” he added.
The U.N. has launched a plan to ensure all nations are covered by disaster early warning systems by the end of 2027.
Endorsing that plan is among the top strategic priorities at a meeting of WMO’s decision-making body, the World Meteorological Congress, which opens Monday.
To date, only half of countries have such systems in place.
Surging economic losses
WMO meanwhile warned that while deaths have plunged, the economic losses incurred when weather, climate and water extremes hit have soared.
The agency previously recorded a sevenfold increase in economic losses between 1970 and 2019, from $49 million per day during the first decade to $383 million per day in the final one.
Wealthy countries have been hardest hit by far in monetary terms.
The United States alone incurred $1.7 trillion in losses, or 39% of the economic losses globally from disasters since 1970.
But while the dollar figures on losses suffered in poorer nations were not particularly high, they were far higher in relation to the size of their economies, WMO noted.
Developed nations accounted for more than 60% of losses from weather, climate and water disasters, but in more than four-fifths of cases, the economic losses were equivalent to less than 0.1% of gross domestic product (GDP).
In none of those cases did reported economic losses exceed 3.5% of the respective country’s GDP.
By comparison, in 7% of the disasters that hit the world’s least developed countries, losses equivalent to more than 5% of their GDP were reported, with several disasters causing losses equivalent to nearly a third of GDP.
And for small island developing states, one-fifth of disasters saw economic losses of more than 5% of GDP, with some causing losses equivalent to 100% of GDP.
SpaceX’s next private flight to the International Space Station awaited takeoff Sunday, weather and rocket permitting.
The passengers include Saudi Arabia’s first astronauts in decades, as well as a Tennessee businessman who started his own sports car racing team. They’ll be led by a retired NASA astronaut who now works for the company that arranged the 10-day trip.
It’s the second charter flight organized by Houston-based Axiom Space. The company would not say how much the latest tickets cost; it previously cited per-seat prices of $55 million.
With its Falcon rocket already on the pad, SpaceX targeted a liftoff late Sunday afternoon from NASA’s Kennedy Space Center. It’s the same spot where Saudi Arabia’s first astronaut, a prince, soared in 1985.
Representing the Saudi Arabian government this time are Rayyanah Barnawi, a stem cell researcher set to become the kingdom’s first woman in space, and Royal Saudi Air Force fighter pilot Ali al-Qarni.
Rounding out the crew: John Shoffner, the racecar buff; and Peggy Whitson, who holds the U.S. record for the most accumulated time in space at 665 days.
It’s been more than a decade since the end of the Iraq War. Much of the country still bears the scars of the U.S.-led invasion. But Iraqis today are working to clean up their country, and some have turned to technology for help. VOA’s Arash Arabasadi has more.
Stepping up a feud with Washington over technology and security, China’s government Sunday told users of computer equipment deemed sensitive to stop buying products from the biggest U.S. memory chipmaker, Micron Technology Inc.
Micron products have unspecified “serious network security risks” that pose hazards to China’s information infrastructure and affect national security, the Cyberspace Administration of China said on its website. Its six-sentence statement gave no details.
“Operators of critical information infrastructure in China should stop purchasing products from Micron Co.,” the agency said.
The United States, Europe and Japan are reducing Chinese access to advanced chipmaking and other technology they say might be used in weapons at a time when President Xi Jinping’s government has threatened to attack Taiwan and is increasingly assertive toward Japan and other neighbors.
Chinese officials have warned of unspecified consequences but appear to be struggling to find ways to retaliate without hurting China’s smartphone producers and other industries and efforts to develop its own processor chip suppliers.
An official review of Micron under China’s increasingly stringent information security laws was announced April 4, hours after Japan joined Washington in imposing restrictions on Chinese access to technology to make processor chips on security grounds.
Foreign companies have been rattled by police raids on two consulting firms, Bain & Co. and Capvision, and a due diligence firm, Mintz Group. Chinese authorities have declined to explain the raids but said foreign companies are obliged to obey the law.
Business groups and the U.S. government have appealed to authorities to explain newly expanded legal restrictions on information and how they will be enforced.
Sunday’s announcement appeared to try to reassure foreign companies.
“China firmly promotes high-level opening up to the outside world and, as long as it complies with Chinese laws and regulations, welcomes enterprises and various platform products and services from various countries to enter the Chinese market,” the cyberspace agency said.
Xi accused Washington in March of trying to block China’s development. He called on the public to “dare to fight.”
Despite that, Beijing has been slow to retaliate, possibly to avoid disrupting Chinese industries that assemble most of the world’s smartphones, tablet computers and other consumer electronics. They import more than $300 billion worth of foreign chips every year.
Beijing is pouring billions of dollars into trying to accelerate chip development and reduce the need for foreign technology. Chinese foundries can supply low-end chips used in autos and home appliances but can’t support smartphones, artificial intelligence and other advanced applications.
The conflict has prompted warnings the world might decouple or split into separate spheres with incompatible technology standards that mean computers, smartphones and other products from one region wouldn’t work in others. That would raise costs and might slow innovation.
U.S.-Chinese relations are at their lowest level in decades due to disputes over security, Beijing’s treatment of Hong Kong and Muslim ethnic minorities, territorial disputes and China’s multibillion-dollar trade surpluses.
The world must urgently assess the impact of generative artificial intelligence, G7 leaders said Saturday, announcing they will launch discussions this year on “responsible” use of the technology.
A working group will be set up to tackle issues from copyright to disinformation, the seven leading economies said in a final communique released during a summit in Hiroshima, Japan.
Text generation tools such as ChatGPT, image creators and music composed using AI have sparked delight, alarm and legal battles as creators accuse them of scraping material without permission.
Governments worldwide are under pressure to move quickly to mitigate the risks, with the chief executive of ChatGPT’s OpenAI telling U.S. lawmakers this week that regulating AI was essential.
“We recognise the need to immediately take stock of the opportunities and challenges of generative AI, which is increasingly prominent across countries and sectors,” the G7 statement said.
“We task relevant ministers to establish the Hiroshima AI process, through a G7 working group, in an inclusive manner … for discussions on generative AI by the end of this year,” it said.
“These discussions could include topics such as governance, safeguard of intellectual property rights including copyrights, promotion of transparency, response to foreign information manipulation, including disinformation, and responsible utilisation of these technologies.”
The new working group will be organized in cooperation with the OECD group of developed countries and the Global Partnership on Artificial Intelligence (GPAI), the statement added.
On Tuesday, OpenAI CEO Sam Altman testified before a U.S. Senate panel and urged Congress to impose new rules on big tech.
He insisted that generative AI developed by his company would one day “address some of humanity’s biggest challenges, like climate change and curing cancer.”
However, “we think that regulatory intervention by governments will be critical to mitigate the risks of increasingly powerful models,” he said.
European Parliament lawmakers this month also took a first step towards EU-wide regulation of ChatGPT and other AI systems.
The text is to be put to the full parliament next month for adoption before negotiations with EU member states on a final law.
“While rapid technological change has been strengthening societies and economies, the international governance of new digital technologies has not necessarily kept pace,” the G7 said.
For AI and other emerging technologies including immersive metaverses, “the governance of the digital economy should continue to be updated in line with our shared democratic values,” the group said.
These values include fairness, respect for privacy and “protection from online harassment, hate and abuse,” among others, it added.
The U.S. Supreme Court on Thursday refused to clear a path for victims of attacks by militant organizations to hold social media companies liable under a federal anti-terrorism law for failing to prevent the groups from using their platforms, handing a victory to Twitter.
The justices, in a unanimous decision, reversed a lower court’s ruling that had revived a lawsuit against Twitter by the American relatives of Nawras Alassaf, a Jordanian man killed in a 2017 attack during New Year’s celebrations at an Istanbul nightclub, an attack claimed by the Islamic State militant group.
The case was one of two that the Supreme Court weighed in its current term aimed at holding internet companies accountable for contentious content posted by users – an issue of growing concern for the public and U.S. lawmakers.
The justices on Thursday, in a similar case against Google-owned YouTube, part of Alphabet Inc, sidestepped ruling on a bid to narrow a federal law protecting internet companies from lawsuits for content posted by their users — called Section 230 of the Communications Decency Act.
That case involved an appeal by the family of Nohemi Gonzalez, a 23-year-old college student from California who was fatally shot in an Islamic State attack in Paris in 2015, of a lower court’s decision to throw out their lawsuit.
The Istanbul massacre on Jan. 1, 2017, killed Alassaf and 38 others. His relatives accused Twitter of aiding and abetting the Islamic State, which claimed responsibility for the attack, by failing to police the platform for the group’s accounts or posts in violation of a federal law called the Anti-Terrorism Act that enables Americans to recover damages related to “an act of international terrorism.”
Twitter and its backers had said that allowing lawsuits like this would threaten internet companies with liability for providing widely available services to billions of users because some of them may be members of militant groups, even as the platforms regularly enforce policies against terrorism-related content.
The case hinged on whether the family’s claims sufficiently alleged that the company knowingly provided “substantial assistance” to an “act of international terrorism” that would allow the relatives to maintain their suit and seek damages under the anti-terrorism law.
After a judge dismissed the lawsuit, the San Francisco-based 9th U.S. Circuit Court of Appeals in 2021 allowed it to proceed, concluding that Twitter had refused to take “meaningful steps” to prevent Islamic State’s use of the platform.
President Joe Biden’s administration supported Twitter, saying the Anti-Terrorism Act imposes liability for assisting a terrorist act and not for “providing generalized aid to a foreign terrorist organization” with no causal link to the act at issue.
In the Twitter case, the 9th Circuit did not consider whether Section 230 barred the family’s lawsuit. Google and Meta’s Facebook, also defendants, did not formally join Twitter’s appeal.
Islamic State called the Istanbul attack revenge for Turkish military involvement in Syria. The main suspect, Abdulkadir Masharipov, an Uzbek national, was later captured by police.
Twitter in court papers has said that it has terminated more than 1.7 million accounts for violating rules against “threatening or promoting terrorism.”
Montana Governor Greg Gianforte on Wednesday signed legislation to ban Chinese-owned TikTok from operating in the state, making it the first U.S. state to ban the popular short video app.
Montana will make it unlawful for Google’s and Apple’s app stores to offer the TikTok app within its borders. The ban takes effect January 1, 2024.
TikTok has over 150 million American users, but a growing number of U.S. lawmakers and state officials are calling for a nationwide ban on the app over concerns about potential Chinese government influence on the platform.
In March, a congressional committee grilled TikTok chief executive Shou Zi Chew about whether the Chinese government could access user data or influence what Americans see on the app.
Gianforte, a Republican, said the bill will further “our shared priority to protect Montanans from Chinese Communist Party surveillance.”
TikTok, owned by Chinese tech company ByteDance, said in a statement the bill “infringes on the First Amendment rights of the people of Montana by unlawfully banning TikTok,” adding that they “will defend the rights of our users inside and outside of Montana.”
The company has previously denied that it ever shared data with the Chinese government and has said it would not do so if asked.
Montana, which has a population of just over 1 million people, said TikTok could face fines for each violation, plus additional fines of $10,000 per day for as long as it violates the ban. Apple and Google could also face fines of $10,000 per violation per day.
The ban will likely face numerous legal challenges on the grounds that it violates the First Amendment free speech rights of users. An attempt by then-President Donald Trump to ban new downloads of TikTok and WeChat through a Commerce Department order in 2020 was blocked by multiple courts and never took effect.
TikTok’s free speech allies include several Democratic members of Congress, including Representative Alexandria Ocasio-Cortez, and First Amendment groups such as the American Civil Liberties Union.
Gianforte also prohibited the use, on government-issued devices, of all social media applications that collect and provide personal information or data to foreign adversaries.
TikTok is working on an initiative called Project Texas, which creates a standalone entity to store American user data in the U.S. on servers operated by U.S. tech company Oracle.
When researchers at a nonprofit that studies social media wanted to understand the connection between YouTube videos and gun violence, they set up accounts on the platform that mimicked the behavior of typical boys living in the United States.
They simulated two 9-year-olds who liked video games. The accounts were identical, except that one clicked on the videos recommended by YouTube, and the other ignored the platform’s suggestions.
The account that clicked on YouTube’s suggestions was soon flooded with graphic videos about school shootings, tactical gun training videos and how-to instructions on making firearms fully automatic. One video featured an elementary school-age girl wielding a handgun; another showed a shooter using a .50-caliber gun to fire on a dummy head filled with lifelike blood and brains. Many of the videos violate YouTube’s policies against violent or gory content.
About a dozen a day
The findings show that despite YouTube’s rules and content moderation efforts, the platform is failing to stop the spread of frightening videos that could traumatize vulnerable children — or send them down dark roads of extremism and violence.
“Video games are one of the most popular activities for kids. You can play a game like ‘Call of Duty’ without ending up at a gun shop — but YouTube is taking them there,” said Katie Paul, director of the Tech Transparency Project, the research group that published its findings about YouTube on Tuesday. “It’s not the video games, it’s not the kids. It’s the algorithms.”
The accounts that followed YouTube’s suggested videos received 382 different firearms-related videos in a single month, or about 12 per day. The accounts that ignored YouTube’s recommendations still received some gun-related videos, but only 34 in total.
The researchers also created accounts mimicking 14-year-old boys; those accounts also received similar levels of gun- and violence-related content.
One of the videos recommended to the accounts was titled “How a Switch Works on a Glock (Educational Purposes Only).” YouTube later removed the video after determining it violated its rules, but an almost identical video with a slightly altered name popped up two weeks later; that video remains available.
A spokeswoman for YouTube defended the platform’s protections for children and noted that it requires users younger than 17 to get a parent’s permission before using the site; accounts for users younger than 13 are linked to a parental account.
“We offer a number of options for younger viewers,” the company wrote in an emailed statement, “… which are designed to create a safer experience for tweens and teens.”
Shooters glorify violence
Along with TikTok, the video-sharing platform is one of the most popular sites for children and teens. Both sites have been criticized in the past for hosting, and in some cases promoting, videos that encourage gun violence, eating disorders and self-harm. Critics of social media have also pointed to the links between social media, radicalization and real-world violence.
The perpetrators behind many recent mass shootings have used social media and video streaming platforms to glorify violence or even livestream their attacks. In a post on YouTube, the shooter behind the 2018 attack that killed 17 in Parkland, Florida, wrote “I’m going to be a professional school shooter.”
The neo-Nazi gunman who killed eight people earlier this month at a Dallas-area shopping center also had a YouTube account that included videos about assembling rifles, the serial killer Jeffrey Dahmer and a clip from a school shooting scene in a television show.
In some cases, YouTube has already removed some of the videos identified by researchers at the Tech Transparency Project, but in other instances the content remains available. Many big tech companies rely on automated systems to flag and remove content that violates their rules, but Paul said the findings from the Project’s report show that greater investments in content moderation are needed.
In the absence of federal regulation, social media companies must do more to enforce their own rules, said Justin Wagner, director of investigations at Everytown for Gun Safety, a leading gun control advocacy organization. Wagner’s group also said the Tech Transparency Project’s report shows the need for tighter age restrictions on firearms-related content.
Similar concerns have been raised about TikTok after earlier reports showed the platform was recommending harmful content to teens.
TikTok has defended its site and its policies, which prohibit users younger than 13. Its rules also prohibit videos that encourage harmful behavior; users who search for content about topics including eating disorders automatically receive a prompt offering mental health resources.
The head of the artificial intelligence company that makes ChatGPT told the U.S. Congress on Tuesday that government intervention “will be critical to mitigate the risks of increasingly powerful” AI systems.
“As this technology advances, we understand that people are anxious about how it could change the way we live. We are too,” OpenAI CEO Sam Altman testified at a Senate hearing Tuesday.
His San Francisco-based startup rocketed to public attention after it released ChatGPT late last year. ChatGPT is a free chatbot tool that answers questions with convincingly human-like responses.
What started out as a panic among educators about ChatGPT’s use to cheat on homework assignments has expanded to broader concerns about the ability of the latest crop of “generative AI” tools to mislead people, spread falsehoods, violate copyright protections and upend some jobs.
And while there’s no immediate sign that Congress will craft sweeping new AI rules, as European lawmakers are doing, the societal concerns brought Altman and other tech CEOs to the White House earlier this month and have led U.S. agencies to promise to crack down on harmful AI products that break existing civil rights and consumer protection laws.
Sen. Richard Blumenthal, the Connecticut Democrat who chairs the Senate Judiciary Committee’s subcommittee on privacy, technology and the law, opened the hearing with a recorded speech that sounded like the senator but was actually a voice clone, trained on Blumenthal’s floor speeches, reciting remarks written by ChatGPT after he asked the chatbot how he would open the hearing.
The result was impressive, said Blumenthal, but he added, “What if I had asked it, and what if it had provided, an endorsement of Ukraine surrendering or (Russian President) Vladimir Putin’s leadership?”
Blumenthal said AI companies ought to be required to test their systems and disclose known risks before releasing them.
Founded in 2015, OpenAI is also known for other AI products including the image-maker DALL-E. Microsoft has invested billions of dollars into the startup and has integrated its technology into its own products, including its search engine Bing.
Altman is also planning to embark on a worldwide tour this month to national capitals and major cities across six continents to talk about the technology with policymakers and the public. On the eve of his Senate testimony, he dined with dozens of U.S. lawmakers, several of whom told CNBC they were impressed by his comments.
Also testifying were IBM’s chief privacy and trust officer, Christina Montgomery, and Gary Marcus, a professor emeritus at New York University who was among a group of AI experts who called on OpenAI and other tech firms to pause development of more powerful AI models for six months to give society time to consider the risks. The letter was a response to the March release of OpenAI’s latest model, GPT-4, described as more powerful than ChatGPT.
“Artificial intelligence will be transformative in ways we can’t even imagine, with implications for Americans’ elections, jobs, and security,” said the panel’s ranking Republican, Sen. Josh Hawley of Missouri. “This hearing marks a critical first step towards understanding what Congress should do.”
Altman and other tech industry leaders have said they welcome some form of AI oversight but have cautioned against what they see as overly heavy-handed rules. In a copy of her prepared remarks, IBM’s Montgomery asks Congress to take a “precision regulation” approach.
“This means establishing rules to govern the deployment of AI in specific use-cases, not regulating the technology itself,” Montgomery said.