Welcome to new board oversight duties. It is one of the great things about board work: it is ever changing and evolving. Every year corporate governance standards shift in an effort to evolve along with the rapidly changing business landscape and stay aligned with the shifting priorities of investors and regulators.
At a recent audit committee meeting we were briefed by our Big Four accounting firm on cyber-risk. They referenced a two-page notification from the Director of the Cybersecurity and Infrastructure Security Agency, Jen Easterly, sent to Directors on February 25, 2022, urging Corporate Directors to be mindful of and prepared for cyber-risks during the evolving Ukraine crisis. (See link: Urgent Letter from the Director of CISA addressing NACD Members – February 25, 2022 (nacdonline.org)) The communication from Director Easterly warns of heightened cyber-risks emanating from Russian threat actors, perhaps acting in retaliation against economic and other sanctions.
It’s highly unusual for a government agency (CISA) to reach out directly to corporate board members.
Additionally, on March 9, 2022, the SEC issued a 129-page cyber regulation proposal: Cybersecurity Risk Management, Strategy, Governance, and Incident Disclosure.
What is particularly noteworthy is the brevity of the comment period, only 30 days, on wide-sweeping rules and requirements that will affect registrants and Corporate Directors alike, perhaps akin in its breadth to the Sarbanes-Oxley Act nearly twenty years ago (which had significant unforeseen burdens and costs for corporations).
(See link to submit comments: SEC.gov | How to Submit Comments)
Another noteworthy factor is that the proposed regulations would affect small companies as well as large multinationals. Understandably so: virtually all companies are connected by the internet, and most supply chains include small dealers, distributors, and manufacturers, so the proposed regulations do not exclude companies based on size. We all recall hearing how breaches of larger companies often originate with the less vigilant or resource-challenged smaller companies in their supply chains and dealer and distributor networks.
The new regulations pose questions to the board such as: Does the board have a cyber expert? What are their credentials and how was their expertise determined? How does the Board execute its oversight of cyber-risks? Does the company consider cybersecurity risks in its business strategy, financial planning, and capital allocation processes?
While the proposed regulation does not mandate which Board Committee should own cyber-risk in its remit, that remains a topic for Boards to contemplate. There are pros and cons to consider. Some observe that the audit committee may already be overburdened: does it have the time and expertise to oversee ever-growing cyber risks? Audit committees also must meet heightened financial reporting deadlines, another consideration to be weighed by the Board.
Companies are also being asked, do you have a Chief Information Security Officer? Where does that person report? What are their credentials? Embedded in these questions is a subtle determination of whether a CISO should report independently of the IT organization, perhaps analogous to the way internal audit functions generally don’t report within the finance organization.
There are specific proposed regulations that are complicated, if not concerning. For example, if there is a material cyber incident, the company would have only four business days in which to publicly disclose it after determining that the incident was indeed material. Determining materiality involves both quantitative and qualitative evaluations; that process needs to be re-examined. Further, the regulations provide that a prior incident that doesn't rise to the level of materiality may subsequently be deemed material when aggregated with other, similar cyber incidents. The process and protocols for this aggregation will require very thorough Board oversight and input.
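To make the aggregation requirement concrete, here is a minimal, purely illustrative sketch. The threshold, fields, and grouping rule are all hypothetical; the proposal does not prescribe any particular method. The idea is that prior incidents that were individually immaterial get grouped with similar later ones, and a category is flagged once the combined impact crosses a materiality threshold.

```python
from dataclasses import dataclass
from datetime import date

# Hypothetical dollar threshold: real materiality determinations are both
# quantitative and qualitative, and would be made with counsel.
MATERIALITY_THRESHOLD = 1_000_000

@dataclass
class Incident:
    occurred: date
    impact: int      # estimated dollar impact
    category: str    # e.g. "ransomware", "phishing"

def aggregate_materiality(incidents, threshold=MATERIALITY_THRESHOLD):
    """Flag any category of similar incidents whose combined impact
    crosses the threshold, even when no single incident did."""
    by_category = {}
    for inc in incidents:
        by_category.setdefault(inc.category, []).append(inc)
    return {
        cat: sum(i.impact for i in incs)
        for cat, incs in by_category.items()
        if sum(i.impact for i in incs) >= threshold
    }
```

A real process would of course weigh qualitative factors (reputational harm, regulatory exposure) that no dollar threshold captures, which is precisely why board oversight of the aggregation protocol matters.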
There are also implied questions: How cyber-ready is the board? Do you have external expert briefings for the board? External experts doing penetration testing? What kind of internal training are you overseeing within the company? Are Directors expected to take external courses and earn credentials in order to stay current?
Another issue that speaks to the Board’s oversight is, do you have adequate insurance and planning in the event of a cyber breach? This raises the question of the adequacy of the company’s cyber insurance, and whether the company is financially modeling cyber-risks across varying probabilities, from an ordinary event all the way to “Black Swan” scenarios.
As is widely reported, insurance companies are scaling back their coverage of ransomware attacks. The recent court case involving Merck’s cyber insurance claim arising from the impact of the NotPetya malware illustrates both the cyber-risk (media reports damages of over $1 billion) and the difficulty in collecting on a policy claim. The Merck cyber insurance case remains in litigation, now nearly five years after the NotPetya attack.
Under the proposed SEC regulation, all companies are covered, regardless of sector. You can imagine a traditional manufacturing company, an iron smelting company for example, asking: why does this affect me? Well, if you have seen some of the reporting lately, Russian criminal groups have been breaching the reporting agents of companies right before earnings reports so that, with the inside information, they can front-run the stock market. One recent media report described an insider trading breach involving imminent earnings reports of public companies in which the perpetrators made $80 million in illegal trades on the public market.
One of the things that strikes me as I ponder current events is that we are proposing such a broad, sweeping set of regulations without clear ways for boards to satisfy their burden. In doing this, are we setting ourselves up for a flood of plaintiff litigation?
I would urge companies to quickly comment and push back hard on these broad regulations which put a huge burden on companies.
Of course, we need to do the right things as directors. Of course, we need to be cyber ready. We need to take this seriously, which presumably all directors already do. We are all diligent, engaged, highly committed stewards for all of our stakeholders. That said, we don’t need penalties, threats, huge bureaucratic regulatory burdens and an avalanche of plaintiff lawsuits.
The other takeaway is clear: as public and private company directors, we must all go on high alert and anticipate a serious threat of cyber-attack on our companies.
We need cyber training for all employees.
We also have to beef up our external third-party resources.
It’s probably a very opportune time to have an outside third-party cyber penetration testing firm review and do white and gray hat cyber exercises on your systems.
Evaluate your backup systems (assuming there may be a serious attack on the cloud), assess the level of your current cyber systems, and consider upgrading your security and cyber software testing.
It might also be a good time to look at building a relationship with a cyber managed services provider who can do external monitoring to augment what you currently have in house.
There is a lot to consider in these unprecedented times, but I urge boards and companies to comment promptly on the SEC’s proposed regulation. I am sharing the link again here, so your voices may be heard: SEC.gov | How to Submit Comments
In July, the four Big Tech CEOs appeared before the House Antitrust Subcommittee, facing questions from Congress for the first time about monopoly power and anticompetitive conduct. This was the latest step of an investigation that began in June 2019. It also provided the public with a first look at internal documents highlighting the companies’ underlying strategies.
Jeff Bezos, CEO of Amazon, was questioned about mistreating third-party sellers on Amazon’s platform in order to market its own products. Internal documents showed Amazon referring to certain third-party sellers as “internal competitors,” raising the question before Congress of how Amazon can fairly serve companies on its marketplace that it also considers competitors. Congress also presented internal emails showing that Amazon sold diapers at a loss in 2009 to drive out Diapers.com, which it referred to as its “#1 short term competitor,” and then raised prices afterward.
Sundar Pichai, CEO of Google parent company Alphabet, was questioned about Google’s web search dominance and its digital advertising business. Google processes around 90% of all US web searches, and advertising generates almost all of the company’s roughly $160 billion in annual sales. Google was questioned about controlling search verticals to drive traffic to its ad business, and was also challenged for forcing partners to bundle Google apps by leveraging its dominance as the most popular mobile software.
When it came to Apple CEO Tim Cook, the questions from the House Antitrust Subcommittee focused on the company’s “tax” on app developers. Apple takes up to a 30% commission on in-app sales and subscriptions in its App Store; developers forgo significant revenue because there is virtually no alternative platform through which to sell their apps.
Facebook CEO Mark Zuckerberg was questioned about the company’s practice of buying out growing competitors such as Instagram and WhatsApp, acquisitions that have been called anticompetitive and allegedly illegal under the Clayton Act of 1914. Jerrold Nadler, the chairman of the House Judiciary Committee, commented: “This is exactly the type of anticompetitive acquisition that the antitrust laws were designed to prevent.”
Following these hearings, on October 20, 2020, the Justice Department filed a highly anticipated antitrust lawsuit against Google, alleging that the tactics Google uses to preserve a monopoly for its search engine are anticompetitive.
The suit alleges that Alphabet (Google’s parent company) uses an unlawful set of complex and exclusionary business agreements to block any competitors from gaining market share.
Boards need to pay attention to this hearing and the subsequent lawsuit, as these ongoing inquiries into the practices of dominant tech companies have the potential to reshape the competitive landscape and lead to revamped regulations.
I had the opportunity to speak with Sally Hubbard, antitrust expert and Director of Enforcement Strategy at Open Markets Institute, and gather some of her insights and takeaways.
She pointed out that this is a landmark moment for businesses, as in the case of Microsoft in 1998. The DoJ and the attorneys general of 20 states filed antitrust charges against Microsoft, accusing the company of forcing computer manufacturers to preinstall Internet Explorer if they wanted to license Microsoft’s operating system. With Microsoft’s 95% market share at the time, manufacturers had little choice but to ship their products with Internet Explorer installed. Microsoft was also found to have made it difficult for consumers to install competing software, such as Netscape, on Windows computers. Microsoft ultimately drove Netscape out of the web browser market by giving away its own browser for free, bundled with its monopoly operating system.
“If the DoJ and other states didn’t bring this case against Microsoft in the 90s, there probably wouldn’t be a Google today,” Hubbard added. “Internet Explorer could have become a monopoly search browser.”
Another landmark case Hubbard mentioned was the breakup of AT&T in 1984. The AT&T breakup helped upstart fiber-optic long-distance firms Sprint and MCI, and it gave consumers a choice among carriers, which drove competition and lowered prices over time.
Both of these landmark antitrust cases spurred innovation and healthy competition that ultimately benefited the consumer.
“The innovation brought about from these major changes in the history of anti-trust have unlocked markets and allowed companies to compete based on merit” said Hubbard.
Consumers are also gaining a heightened awareness of the drawbacks of monopolistic power, especially through the lens of data privacy. In the past, so-called “surveillance capitalism” has been highly profitable, with little regulation or transparency.
Today, consumers are beginning to understand how much of their data is wielded by these tech giants – Netflix aired a docudrama film titled “The Social Dilemma” which explores the dangerous impact of social networking with former tech execs ringing the alarm bell over what their own creations have become. “The Social Dilemma” made headlines with many calling it a “huge wakeup call for consumers”.
Boards need to take a closer look at their Enterprise Risk Management (ERM) and review their data privacy practices. Boards should ask management to proactively decide, at a high level, the company’s view on data privacy and how it fits with the brand promise. Better privacy practices are a good long-term business move, as consumers are increasingly concerned with how their data is stored and used.
Board members would be well served to stay up to date on these ongoing developments and may want to consider setting aside time during the next board meeting to review their own company’s data practices to ensure they are aligned with regulatory standards and discuss how a stricter regulatory environment may impact their business.
Often, when new regulations are introduced in an industry, they can bear unintended consequences for the future.
Since this past summer, legislators have been contemplating a law relating to cyber defense called the Active Cyber Defense Certainty Act, or ACDC. The bill is intended to limit the legal consequences for parties that would otherwise violate computer fraud and abuse law when responding to or defending themselves against cyber intrusions.
It may seem like a straightforward act of protection at first glance, but the bill has evoked a lot of big questions as it has worked its way through the government, especially from the business vantage point.
Cyber Pros & Cons
And with good reason. If or when the act becomes law, one concern is that boards and executives will be asked to make decisions on “hacking back,” meaning using a protocol of active cyber defense, when they are really not in the best position to do so.
Given the way cyber attacks operate, i.e. off-grid in the unregulated dark web, any counter action, even by a government entity, runs the risk of being founded on incomplete and/or misleading information in the first place.
These decisions could potentially have disastrous consequences not just for companies, but for the global economy as a whole.
Proponents of the ACDC bill argue that defenders would benefit from tools and rights, as long as they observe a version of the following protocol:
1. Establish attribution of an attack.
2. Disrupt the cyberattack without damaging another party’s computers or other property.
3. Retrieve and destroy stolen or compromised files.
4. Monitor the behavior of an attacker.
5. Utilize beaconing technology.
The justification we’ve heard most loudly from legislators is that a very small number of cybercrimes are prosecuted, leaving criminals to face no consequences for their illegal behavior, and that the hacking-back guidelines listed above can provide that deterrence.
But let’s examine what we see as perhaps the biggest reason why prosecution of cybercrimes is so rare: Attribution is difficult. Nearly impossible, in many cases. The anonymity that the internet provides, and the ability to be located almost anywhere in the world, contributes to this challenge. That it is easy for bad actors to falsify evidence and make an attack look like it originated from someone or somewhere when it didn’t makes matters all the more complicated.
An example of a so-called “false flag” operation was the hacking of the French TV network TV5Monde. The attack was made to look like it was perpetrated by an ISIS-affiliated group. But as it turned out, the attackers were in fact tied to the Russian government.
The reality is that even governments and agencies with ample resources to invest in a defensive strategy struggle with attribution. How then can we expect private enterprises, with more meager intelligence resources, to accomplish this effectively and with minimal errors?
What happens if a well-intentioned defender truly believes they’ve identified the source of a cyber crime, and even has evidence that points to a specific actor—but it turns out they were wrong? Would the company and the individual be prosecuted? Do they have safe harbor protection?
As written, the ACDC bill is murky in this regard at best. It appears to offer a defense strategy that someone indicted under the original Computer Fraud and Abuse Act could use. By saying that they were engaging in hack-back efforts, even a malicious party could theoretically get off the hook and avoid prosecution using the ACDC Act.
Even the first objective of the bill—“Establish attribution of an attack”—presents a chicken-and-egg problem. How do you establish attribution without the active defense? And how do you engage in active defense if you don’t have attribution?
If an organization were attacked, for example, the bill suggests an objective of the hack-back activities is for the victim to understand where the attack originated. But if they don’t already know that, who are they actually hacking back in the first place?
Most cyber criminals worth their salt will not use their own systems to launch an attack.
For executives and corporate officers especially, this means any hack-back attempt will almost inevitably cause significant collateral damage.
Multiple layers of obfuscation and indirection are standard in this criminal realm. Often, perpetrators will look to Internet of Things systems because they allow them to use the devices of unsuspecting individuals and even resources running on cloud service providers like Amazon, Microsoft, or Google.
Do you really want to face the blowback from launching an offensive at the likes of Amazon or Facebook because someone used their platforms for a cybercrime—especially if they are not the ultimate target? This is what occurred with the Mirai botnet event, in which a group of adolescent hackers wreaked havoc on the web by taking advantage of such IoT systems technologies.
Ultimately, this illustrates how, for the most part, hacking back leads to a slew of unintended harmful consequences, both for whoever owns the intermediary systems and for the company proactively trying to protect itself from a cybercrime.
Who bears the burden
With the ACDC bill, corporations are burdened with deciding whether or not to act in their own self interest, and whether or not to risk doing damage to an unsuspecting victim—quite possibly a manufacturing company with a bunch of compromised IoT devices.
There are literally millions of bots out there. When it comes to the largest botnets—the networks that connect bots and help spread pernicious cyber viruses, attacks, and the like—who is the actual target of an active defense strategy?
In many cases, we just don’t know.
In other cases, it could be several groups or individuals, or even nation states. In still others, it could be an unsuspecting victim being taken advantage of by bad actors.
Other likely victims of collateral damage are the organizations involved, whether directly or indirectly. Depending on the severity of a cybercrime, a government or public body may force a company into a public reporting cycle, unintentionally triggering class actions and derivative lawsuits as well as damage to an organization’s public reputation.
The problem with upholding the law on the internet
Apparently both the current and former heads of the FBI think active defense is a bad idea. As FBI director Christopher Wray commented, “We don’t think it’s a good idea for private industry to take it upon themselves to retaliate by hacking back at somebody who hacked them.” Which, to be clear, is precisely what the ACDC would permit.
Former FBI director James Comey also expressed concern that any kind of active defense strategy could impede the FBI’s own law enforcement efforts. This is especially true now as cybercrime and geopolitics become more and more intertwined.
How do you ensure the nation state on the other end of an attack doesn’t consider this an act of war? And, what if those nations pass their own hacking back laws and use that as pretext to hack into our own corporations? What if during the active defense you unintentionally interfere with, destroy, or somehow affect data or resources that belongs to a third country, one where hacking is illegal?
If the ACDC does ultimately become law, one saving grace is that companies would have to notify the FBI and the US Department of Justice before engaging in active defense. Perhaps those departments and agencies, as the implementers of the law, can police the process. But that feels like a risk in and of itself, and it threatens to place an undue burden on security and defense agencies whose resources could be used elsewhere.
The next big cybercrime frontier
Regardless of whether the government passes this legislation, hacking back is not a viable security defense strategy. There is no precedent to show this kind of strategy is effective with any kind of criminal activity, let alone with cybercrime, where the dynamics are inherently so complicated and opaque.
Government efforts would be better spent defining rules of engagement, like the ones we have in the Geneva Conventions.
If the bill does pass, companies and their boards of directors should think long and hard before going the hack-back route, given the many unpredictable and unintended side effects that have so far done more damage than good.
Our energy would be better spent building effective detection strategies, before we think about hitting back. Especially in today’s 24/7 news cycle, companies will be judged not on whether they hacked back, but on how effectively they detected a breach and how conscientiously they chose to respond, with the minimal amount of damage and fallout for all parties involved.
As with all processes, boards should ask management to review the current security strategy. If the hack-back option is part of that strategy, it’s probably worth adding to the agenda of your next board meeting.
There’s too much at stake to be surprised by a well-intended chief information security officer who doesn’t consider the fine print—including all the potential consequences.
Cyber-security is a hot topic at every company this year, and it needs to be a board-level discussion; the risk associated with cyber-attacks and data breaches is now clear from all the headlines. FedEx and Maersk each forecast $300 million in losses based on the NotPetya attack. According to data from Juniper Research, the average cost of a data breach will exceed $150 million by 2020, and by 2019 cyber-crime will cost businesses over $2 trillion, a four-fold increase from 2015. The risks are not just financial: a breach can completely paralyze a business, and it takes most businesses about 197 days to detect a breach on their network. It is clearly a significant enough risk that it should be addressed at the board level.
But where do you start and what should be the focus? A recent Gartner report says detection and response plans should be the top security priority for organizations. Prevention is no longer the primary focus of a cyber-security program; it’s a matter of quickly detecting breaches and having a plan in place to respond and mitigate. “The shift to detection and response approaches spans people, process and technology elements and will drive a majority of security market growth over the next five years,” said Sid Deshpande, principal research analyst at Gartner.
Boards should expect a shift in the cyber-security spending recommendations from their CISO in the coming year, beginning with human capital. Because prevention has been the focus in the past, people skilled in detection and response are scarce and their services are expensive. On the equipment/software side, the need for better detection and response has created new security product segments, such as deception, endpoint detection and response (EDR), software-defined segmentation, cloud access security brokers (CASBs), and user and entity behavior analytics (UEBA). These new segments are creating net new spending but are also reducing spending on existing segments such as data security, endpoint protection platforms (EPP), network security, and security information and event management (SIEM). According to data gathered from Gartner, organizations spend an average of 5.6% of the overall IT budget on IT security and risk management.
Worldwide Security Spending by Segment, 2017-2019 (Millions of U.S. Dollars). Source: Gartner (August 2018)
Not only is the focus of cyber-security shifting, but the analysis of a successful system is changing as well. CISOs are measuring their security strategy in terms of the business value associated with quick damage limitation, in addition to threat prevention and blocking. The goal is to get better visibility across their security infrastructure to make better decisions during security incidents. This visibility will enable them to have a more strategic and risk-based conversation with their executive team and their board of directors.
Expect to see these shifts in focus from prevention to detection and response when you review your company’s cyber-security strategy. And don’t be surprised if information security is a larger line item in your next budget review: worldwide spending on information security is expected to reach $113 billion by 2020.
By Betsy Atkins & Rahul Kashyap
A recent study from the National Association of Corporate Directors highlights that one in five directors is dissatisfied with the quality of cyber-risk information that the board gets from management. Board members who felt their company was properly secured against a cyber-attack fell to 37% in 2017 from 42% in 2016.
One of the primary reasons for this drop in cyber-security confidence is that most boards simply don’t feel qualified enough to push their chief security officer for answers on what vulnerabilities their company faces and how they’re protecting against today’s attacks. As a result, most board-level conversations are general in nature, such as, “Are we spending on the right things?”
Cyber-security needs to be a board-level discussion, and a vigorous one. Just consider the recent headlines illustrating the risks. FedEx and Maersk each forecast $300 million in losses tied to the NotPetya attack. This year, it is estimated cyber-crime will cost businesses more than $2 trillion—a four-fold increase from 2015. And according to data from Juniper Research, the average cost of a data breach will exceed $150 million by 2020. The risks are not just financial, they could completely paralyze a business.
So how can board members get their hands around the issue? One of the biggest problems boards face is that they simply don’t have enough of an understanding of how attackers target companies and what the proper response should be. Security needs to be more than a series of patches or spending on security technology. Board members need to be able to understand their organizations’ vulnerabilities in context with their security capabilities.
There are a lot of resources available for board members to educate themselves on the security challenges their businesses face. A great place to start is the NACD’s Director’s Handbook on Cyber-Risk Oversight, which lays out five principles creating the framework for a proactive means of addressing cyber risks. It’s a practical guide, including specific tips, templates, and resources for implementation.
The board’s enterprise risk management committee should also discuss the organization’s cyber-security risk and preparedness directly with the executive team. In these discussions, there are three important points to understand.
First is what is being protected. Do we know what our assets are (IT devices, intellectual property, applications, etc.), especially in the autonomous, connected world we live in? How are we protecting those critical assets? How do we quantify cyber risk internally, and how is that tracked and benchmarked over time?
Second is who might attack. What are the threats that are the most concerning, and how have those changed over time? What is the model we are using to think about insider threats? How about threats originating in our supply chain?
Finally, discuss how the organization plans to defend against those attacks. Are we falling into the trap of assuming we can simply prevent every threat? What is our response strategy? Are we providing our security teams with the tools necessary to stop today’s attackers? How are we making sure we aren’t chasing the latest security and tech fad? What are our people and process challenges when it comes to security operations (burnout, training, knowledge management), and how are we managing them?
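One way to ground these three discussions is a simple risk register. The sketch below is purely illustrative (the scoring model, field names, and figures are all hypothetical): each asset gets a likelihood and an impact estimate, and the board sees the exposures ranked, so the conversation starts with what is being protected rather than with technology.

```python
# Hypothetical risk-register sketch: score each asset by likelihood x impact
# and sort so the largest exposures surface first for board discussion.

def prioritize_risks(assets):
    """assets: list of dicts with 'name', 'likelihood' (0-1 probability
    of compromise over the period), and 'impact' (estimated dollars)."""
    scored = [
        {**a, "exposure": a["likelihood"] * a["impact"]}
        for a in assets
    ]
    return sorted(scored, key=lambda a: a["exposure"], reverse=True)
```

A real register would also track qualitative factors (insider threat, supply-chain origin, response readiness), but even this crude ranking forces the inventory question: do we actually know what our critical assets are?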
Once they have an overview of the risks and a framework, board members will be better equipped to drill down to their companies’ specific risks.
The cost and impact of cyberattacks and data breaches has been well defined—enough so that boards can no longer delegate the oversight of cybersecurity to the executive team. By understanding an organization’s vulnerabilities and position within the broader attack landscape, board members can better address shortcomings and potentially start mitigating those risks for their companies.
Betsy Atkins is CEO and founder of Baja Corporation and author of Be Board Ready. She currently sits on the boards of Wynn Resorts, SL Green Realty, Schneider Electric, and Volvo Cars. Rahul Kashyap is president and CEO of Awake Security. He previously was CTO at Cylance, a cybersecurity software company acquired by BlackBerry in 2019.
By Anna Akins, Technology Reporter
Smart speaker adoption has exploded in recent quarters, but so have concerns about their privacy risks.
Research firm IDC said a total of 99.3 million smart speaker units shipped globally in 2018, representing a 141% jump over the 41.2 million units shipped in 2017. The trend indicates that consumers are finding value in speakers that respond to voice commands to perform various tasks or services.
Technology companies maintain that listening to the recordings captured on smart speakers will help improve the functionality of the devices, but analysts and industry experts said they need to be more transparent about their data practices and also provide users with more control over their privacy. Failing to do so, they said, could result in significant reputational and regulatory consequences.
Speakers, staff listening
A Bloomberg News report in April first revealed that Amazon.com Inc. employees listen to voice recordings captured on the company’s Alexa-powered Echo smart speakers in the homes and offices of device owners. A month later, privacy groups filed a complaint with the U.S. Federal Trade Commission over privacy concerns related specifically to the company’s Echo Dot Kids Edition speaker, saying the product may be violating the Children’s Online Privacy Protection Act by collecting personal information from children under the age of 13.
A group of senators has also called for the FTC to investigate the matter.
Amazon contends that the recordings are meant to help the company improve speech recognition capabilities.
“We only annotate an extremely small number of interactions from a random set of customers in order to improve the customer experience,” an Amazon spokesperson said in an emailed statement. “For example, this information helps us train our speech recognition and natural language understanding systems, so Alexa can better understand your requests and ensure the service works well for everyone.”
Amazon currently dominates the smart speaker market, capturing 38.2% of the global market share of smart speakers shipped during 2018. Google LLC is not far behind, holding 30.3% of the global market share for smart speakers shipped in 2018, according to IDC.
Alexa software is designed to record audio after it detects a specific “wake word,” such as “Alexa,” but Amazon employees reportedly overhear more private interactions from Echo device owners if the Alexa software is unintentionally awakened.
Amazon also does not explicitly tell consumers that its staff is listening to their interactions with Alexa. Instead, the company says on its website: “We use your requests to Alexa to train our speech recognition and natural language understanding systems.”
Florian Schaub, a University of Michigan professor in the School of Information with expertise in smart speakers and privacy, said in an interview that the somewhat “vague” privacy policies Amazon and its peers have developed make it difficult for users to fully understand how their personal data is being handled.
“Amazon annotating some of its recordings makes a lot of sense to me because that’s the only way to improve the speech recognition technologies,” Schaub said. “But at the same time, the way it’s doing it at the moment kind of happens behind the back of the consumer.”
Schaub also said that large tech firms must be more “proactive” in educating users about the specific types of data they are collecting and why they are collecting it in order to build trust.
According to a recent IDC consumer survey, about 59% of respondents said they are “highly concerned” about the privacy of their smart home devices, while about three-fourths of those surveyed said they are at least “somewhat concerned.”
“Security and privacy is a major concern for most consumers,” Adam Wright, a senior analyst at IDC who conducted the survey, said in emailed comments. “Most consumers are unhappy or unsure about sharing information with first-party device makers but are decidedly against sharing information with third-party companies.”
Similarly, another recent study conducted by Parks Associates, a market research and consulting company, found that about 25% of U.S. broadband households that do not own a smart home device cite privacy and security concerns as the main reasons why.
Parks Associates also recently found that 44% of U.S. broadband households surveyed are “very concerned” that someone might gain access and control to their smart home products without their permission, while 21% said they were “concerned.”
“So much is riding on voice control,” Brad Russell, a research director at Parks Associates who has expertise in connected home technologies and data privacy, said in an interview. “It’s just a spider web of technology that’s going to be everywhere — in your car, in your workplace, in the mall.”
Russell said big tech companies must give consumers more granular controls of their data, noting it could be “catastrophic” if they were found to be misusing private information.
Looming legislation, regulation
In May, the California State Assembly’s privacy committee advanced a bill known as the “Anti-Eavesdropping Act” that would prohibit makers of smart speaker devices from sharing voice recordings with third parties. Under the bill, Amazon, Google, Apple Inc. and other smart speaker providers cannot store recordings unless consumers provide their consent in writing.
“Recent revelations about how certain companies have staff that listen in to private conversations via connected smart speakers further shows why this bill is necessary to protect privacy in the home,” Assemblyman Jordan Cunningham, R-Calif., the bill’s author who introduced the legislation in January, said in a statement.
Schaub said the U.S. should consider implementing a privacy approach similar to Europe’s sweeping new privacy law known as the General Data Protection Regulation, or GDPR, claiming current U.S. laws are a bit too “reactionary.”
“One of the big problems with how we think about regulatory frameworks with regards to privacy and consumer protections in the United States is that it’s very piecemeal,” Schaub said. “Whereas in Europe, GDPR has a better approach in terms of clearly defining what you need to do when you’re collecting personally identifiable data.”
Among other provisions, the GDPR requires a company to obtain unambiguous affirmative consent from a user before collecting or processing the user’s personal data.
Given the potential for more legislation and regulation, it will be imperative for all technology companies to take the “highest business ethics position” to steer clear of violating consumers’ personal liberties, Betsy Atkins, who is CEO and founder of venture capital firm Baja LLC and has expertise in corporate governance matters, said in emailed comments.
Ultimately, the potential for brand damage and loss of trust, Atkins said, is not worth the risk.
“As board members this is one of the important moral compass/true north moments in balancing both short and long term and doing the appropriate stewardship for the shareholders,” she said.
The two recent incidents at Apple remind us that corporate espionage is a serious threat that your board should be aware of. For the second time in six months, Apple, working with the FBI, is accusing an engineer, a Chinese national, of stealing trade secrets related to self-driving cars. The investigation began when another employee reported seeing Jizhong Chen taking photographs in a sensitive area.
Apple Global Security searched Chen’s computer and found thousands of files containing Apple’s intellectual property, including manuals, schematics, and diagrams, as well as about a hundred photographs taken inside an Apple building. Authorities apprehended Chen the day before he was set to leave for China; it was later learned he had applied for a job with a competing autonomous-drive company.
Espionage can affect companies of any size, and the likeliest threat comes from within your own organization. G4S, a British multinational security services company headquartered in London, estimates the cost of corporate espionage at as much as $1.1 trillion annually. By comparison, it estimates the impact of business-critical data being stolen remotely at $400 billion a year. Focusing solely on the threat of a cyberattack while ignoring corporate espionage leaves a serious risk unaddressed, and it is one boards should consider. Boards may want to ask management what internal processes and protections are in place; often there are none. The board can then request that management seek external expertise and create a plan.
First gather the data on who, what and how:
Who? The spy could be a dissatisfied or disgruntled employee, a supplier, a competitor, a foreign government, or anyone else with access to sensitive data.
What are they after?
How can we protect our company?
These are just a few of the ideas for protecting against corporate espionage that your board might discuss. A key way to thwart spies is to continuously educate your employees: teach them about the potential threats your company faces and the role they play in the security of your organization. Cover simple security practices, such as changing passwords, and give them examples of social engineering attempts they may encounter. Your employees are your first line of defense against corporate espionage, and potentially your best, as the Apple example shows: it was an employee who noticed something odd and reported Jizhong Chen.
The board may wish to ask for and review management’s complete and comprehensive internal espionage policies and programs.