On June 21, 2017, Congress received a letter from 103 eminent cybersecurity experts and researchers (the list was compiled by the National Election Defense Coalition (NEDC) and partners) about reducing election hacking risks. The letter is linked from Zack Whittaker’s article “Security experts warn lawmakers of election hacking risks,” posted on June 21, 2017 at http://www.zdnet.com/article/security-experts-sign-warning-letter-amid-election-security-failings/
The letter contains three recommendations:
- Establish voter-verified paper ballots as the official record of voter intent.
- Safeguard against internet-related security vulnerabilities and assure the ability to detect attacks.
- Require robust statistical post-election audits before certification of final results in federal elections.
The second recommendation is standard Cybersecurity 101, and that is fine: many election-related and voting systems do not comply with even these minimal standards. But those systems are acquired, maintained and run by the states, and many elections are held at the state and local levels, which raises the question of why only federal elections are included in the third recommendation. It is the first recommendation, however, that I find the most relevant, namely, going back to paper records. While I happen to agree with it, it does somewhat reverse our rush towards a paperless society.
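The first and third recommendations work together: a voter-verified paper trail gives auditors a physical record to sample, and even a modest random sample can reveal when reported tallies diverge from the paper. The toy simulation below illustrates that idea only; it is not any specific audit procedure (such as a risk-limiting audit), and all the numbers are made up for illustration.

```python
import random

def audit_sample(true_share, n_ballots, sample_size, seed=1):
    """Hand-count a random sample of a simulated paper trail and
    return the winner's share of the votes in that sample."""
    rng = random.Random(seed)
    # Simulate the paper ballots: 1 = vote for the reported winner, 0 = other.
    winner_votes = int(n_ballots * true_share)
    ballots = [1] * winner_votes + [0] * (n_ballots - winner_votes)
    sample = rng.sample(ballots, sample_size)
    return sum(sample) / sample_size

# Suppose the machines reported 55% for the winner, but the paper
# actually shows 48%. A sample of 2,000 out of 100,000 ballots will
# almost certainly come back well below the reported 55%.
share = audit_sample(true_share=0.48, n_ballots=100_000, sample_size=2_000)
print(round(share, 3))
```

The point is simply that without paper there is nothing independent to sample, so this kind of check is impossible no matter how sophisticated the statistics.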
Also, I recently read the Special Report in the April 10, 2017 edition of TIME magazine on “What It Will Take to Rebuild America” by David von Drehle. The article “A safer, smarter grid” has a subtitle of “The threat of cyberattack calls for a manual backup.”
As one who was heavily involved in disaster recovery planning over several decades, I often advocated a backup policy that included physical, often manual, systems, and I have long been concerned that the more we introduce new online technologies, the further we move from viable physical backups. I still recall the work, in which I was directly involved, on backup for the U.S. securities industry over the Y2K date rollover. Certain suggestions, such as reverting to sending tapes by courier or setting up a call-in facility for order entry if the networks and some systems were down, were no longer feasible: practically all tape drives had been decommissioned in favor of telecommunications, and most of the telephone lines and local data-entry terminals had been disconnected. I worry that we are moving rapidly towards even greater dependency on the Internet for communications and transactions, to the point where it is no longer possible to revert to former methods that do not depend on the Internet.
It was pointed out to me some time ago that the reason there are so few backup facilities is that, had they been included in the initial proposals, those projects would never have been approved in the first place, because roughly doubling the cost would likely destroy the project’s ROI. The ploy then becomes one of obtaining approval for the initial system without backup and then, once the project is up and running, revisiting the disaster backup issue with management. A better approach would seem to be a separate disaster backup and recovery fund, used to provide the needed backup facilities without impinging on the primary system’s budget. If this sounds like a shell game … it is. However, within the typical workings of large organizations, it might be the only way to satisfy management and ensure that appropriate backup and disaster recovery are instituted. After all, if backup facilities are considered overhead and allocated across all businesses, they may be more palatable. It also makes sense to consider the whole disaster-recovery effort separately, so that funds can be prioritized based on the criticality of the systems involved. Many of us went through such an exercise for Y2K, and it showed that, from a business perspective, not all systems need to be backed up to the same extent … some need hot real-time backups, whereas for others it can take hours, days or even weeks before much impact of a failure is felt.
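The ROI arithmetic behind this ploy is simple enough to sketch. The figures below are entirely hypothetical, chosen only to show how including a backup facility that doubles the cost can flip a project from approvable to rejectable:

```python
def roi(annual_benefit, cost, years=5):
    """Simple multi-year ROI: net benefit over the period, as a
    fraction of the up-front cost."""
    return (annual_benefit * years - cost) / cost

# Hypothetical project: $1M system returning $400K/year over five years.
primary_only = roi(annual_benefit=400_000, cost=1_000_000)
with_backup = roi(annual_benefit=400_000, cost=2_000_000)  # backup doubles the cost

print(primary_only)  # 1.0 -- a 100% return; likely approved
print(with_backup)   # 0.0 -- the project merely breaks even; likely rejected
```

Funding the backup from a separate disaster-recovery pool, allocated by system criticality, keeps this penalty out of each individual project’s business case, which is exactly the shell game described above.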
The lack of backup explains why, on so many occasions when disasters occur, you simply have to wait until whichever systems, networks and infrastructure elements went down are brought back up again. That is acceptable for short outages that respond quickly to established recovery procedures, but certain disasters, such as a bridge collapsing or a large electricity generator or transformer blowing up, can take months to rectify.
This is a Security Bloggers Network syndicated blog post authored by C. Warren Axelrod. Read the original post at: BlogInfoSec.com