Thousands of cyber security professionals flocked to San Francisco this week for the annual RSA Conference, where executives, government workers, and researchers gather to discuss the latest trends in protecting firms from hackers.
This year’s event had the unfortunate timing of coinciding with the coronavirus outbreak, which led companies like IBM, AT&T, and Verizon to pull out of the event over fears that employees could catch the virus.
But the show carried on, and attendees—a few wearing face masks—soldiered on to hear about topics including artificial intelligence’s impact on cybersecurity, the rise of easy-to-access genetic-testing services and their impact on data privacy, and the Department of Defense’s decision to ban U.S. government agencies from buying telecommunications equipment from Chinese tech giant Huawei.
Here are a few of the biggest takeaways from the conference sessions:
A leading cyber security expert shares his thoughts on A.I.
Adi Shamir, a co-inventor of the RSA cryptosystem that gives the conference its name, argued that the industry still lacks a basic understanding of the deep neural networks it increasingly relies on.

“We don’t understand why they’re working so well,” Shamir said. “And second, we don’t understand why they’re working so terribly.”
Shamir says technologists have a difficult time explaining how neural networks learn to discover patterns in large amounts of data. These networks are so big that using one can be like having a million interconnected calculators, which makes it a challenge to pinpoint the few that made the most difference in reaching a conclusion.
Shamir noted that some A.I.-powered image recognition systems can be fooled by photos that look normal but have been subtly altered by other A.I. systems, as in the case when Google’s technology mistook an image of a turtle for a rifle.
“Until we solve this problem, I think it can be very dangerous to use deep neural networks in autonomous vehicles and in making life and death choices in medicine,” Shamir said. “Machine learning has made tremendous advances in the last ten years, but there’s still many problems.”
In praise of paper
The most secure way of protecting elections from hackers involves using trusty paper ballots, Ronald Rivest, a cryptographer and MIT professor, told conference attendees.
“Putting trust on electronic components that are hackable is just not the way to go,” Rivest said. Voting records stored on paper, as opposed to digital bits, can be more easily verified and are less likely to be tampered with, he believes.
As for newer technologies like blockchain, which some technologists have praised as a potentially tamper-proof way to record transactions between multiple parties, Rivest had some doubts.
He joked that blockchain presents a dilemma of “garbage in, garbage stays forever,” a play on the saying “garbage in, garbage out,” which refers to the notion that analysis based on bad data will be inherently flawed. “Maybe it’s not the best for voting,” Rivest said.
Genetic testing, security, and privacy
The rise of off-the-shelf genetic-testing services from companies like 23andMe has some privacy advocates concerned about potential consequences. After all, stolen computer passwords can always be changed, but DNA information is forever.
On a panel about security and genetic testing, Kaiser Permanente chief medical officer Dr. Patrick Courneya said one of his biggest concerns involves privacy and transparency. When consumers have their genetics tested, they can sign consent forms that let companies use the data for further research. Courneya worries that today’s consent forms may not account for future ways businesses could use that data, as the technology continues to progress faster than data privacy regulations.
“I would argue the consent I give today about the use of my genetics may be fundamentally different in a year,” Courneya said. “I think we should not draw false comfort from the ways the structure is set up right now.”
Meanwhile, 23andMe chief legal and regulatory officer Kathy Hibbs said that while the focus on genetics and data privacy and security is important, most consumers should also be thinking about the security and privacy of their more standard health care records.
The battle between the DoD and Huawei
A panel on the federal government’s blacklisting of Huawei, which included representatives from both the U.S. Department of Defense and the Chinese tech giant, did not end in harmony.
Katie Arrington, the Defense Department’s chief information security officer for acquisition, reiterated the government’s position that Huawei’s alleged ties to the Chinese government make the company a national security risk. She said that her agency has classified data, which she cannot share, that proves her point.
Because the data shows that there is a “known vulnerability” with Huawei equipment, the “recommendation was made to take Huawei out.”
“The law is the law,” Arrington said. “I work at the DoD—I’m going to enforce the law.”
Huawei chief security officer Andy Purdy, however, criticized the federal government’s decision, arguing that there are ways, short of completely blacklisting a company, to ensure that the government purchases equipment free of so-called backdoors. The “rip and replace will take more time and money than anticipated,” Purdy said.
Additionally, he said Huawei has created features intended to provide more transparency to its customers and alleviate potential concerns. But Kathryn Waldron, a research fellow at the R Street Institute think tank, explained that because federal policy makers believe tech companies in China are synonymous with the Chinese government, they aren’t likely to be won over by Huawei’s transparency features.