A local reporter spent eight hours interviewing students and faculty in the computer science and information assurance (IA) programs at Norwich University a couple of days before I began writing this article. At one point, he asked half a dozen of our students what they felt was special about their education in the School of Business and Management. One young man responded immediately that the focus in our programs is service to organizations in furtherance of their mission-critical objectives; in contrast, he said, he had the impression that some of the students he had met at various computing and security competitions, from well-established programs at other institutions, were focused primarily on details of technology. “People use technology to achieve business goals,” he said, “not just because technology is interesting and fun.” Another student laughed and pointed at me: “Prof Kabay has drilled us in every course with his motto, ‘Reality trumps theory.’” Students nodded and explained that they had learned never to solve problems by rote, as if recipes and checklists could be applied without careful consideration of the specific requirements of each situation.
Continue reading
Vulnerability management is the embodiment of continuous process improvement in system security.
In a recent discussion in the Norwich University IS342 (Management of Information Assurance) course in the Bachelor of Science in Computer Security and Information Assurance program, the class reviewed Rebecca Gurley Bace’s Chapter 46, “Vulnerability Assessment,” from the Computer Security Handbook, 5th Edition.
Bace explains that vulnerability management includes three phases (sketched in code after this list):
- Assessing deployed information systems to determine their security status;
- Determining corrective measures; and
- Managing the appropriate application of the corrections.
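These phases form a repeating cycle rather than a one-time project. As a rough illustration only (not taken from Bace’s chapter), the sketch below models that cycle in Python; the Finding class, the CVSS-style scores, the placeholder CVE identifier, and the function names are all hypothetical.

```python
# Hypothetical sketch of the three vulnerability-management phases described above.
# Nothing here comes from Bace's chapter; names, thresholds, and data are illustrative.
from dataclasses import dataclass

@dataclass
class Finding:
    host: str
    cve_id: str
    severity: float        # e.g., a CVSS-style base score
    remediation: str = ""  # chosen in phase 2
    applied: bool = False  # tracked in phase 3

def assess(hosts):
    """Phase 1: assess deployed systems to determine their security status (stubbed scan)."""
    return [Finding(host=h, cve_id="CVE-XXXX-YYYY", severity=7.5) for h in hosts]

def determine_corrections(findings):
    """Phase 2: determine corrective measures, worst findings first."""
    for f in sorted(findings, key=lambda f: f.severity, reverse=True):
        f.remediation = "apply vendor patch" if f.severity >= 7.0 else "defer to next cycle"
    return findings

def manage_application(findings):
    """Phase 3: manage the application of the corrections (ticketing, change control, verification)."""
    for f in findings:
        f.applied = True   # in practice, confirmed by a rescan in the next assessment phase
    return findings

if __name__ == "__main__":
    cycle = manage_application(determine_corrections(assess(["web01", "db01"])))
    print(f"{sum(f.applied for f in cycle)} of {len(cycle)} findings remediated")
```

Feeding the remediated systems back into the next assessment is what makes the cycle an instance of continuous process improvement.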
Continue reading
The following article is from information security expert Michael Krausz in Vienna, with editorial and textual contributions from Mich Kabay.
At a courthouse in Austria, on 28 February 2012, a security-training exercise went wrong.
In the weeks running up to the events of 28 February, police forces and the courthouse management were involved in planning what they believed to be a bright idea: conducting an exercise for courthouse staff on how to respond to someone running amok within the building.
Continue reading
Francis Cianfrocca, a leading expert on Advanced Persistent Threats, presents an overview of the issues. What follows is Mr Cianfrocca’s work with contributions and edits from M. E. Kabay.
Advanced Persistent Threat (APT) has received a great deal of attention in recent months[1] due, in large part, to a spate of highly publicized successful attacks against the information assets of major enterprises and corporations. Much of the recent focus on APT has come as a result of the RSA breach,[2] believed to be an APT-style attack,[3] which led directly to a handful of serious attacks “down-line” within several of RSA’s major enterprise customers.[4]
Continue reading
Because people execute security policies (or violate them), hiring, managing and (alas) firing are important aspects of information assurance (IA) management. In a recent class discussion of personnel policies and security, the IS342 Management of Information Assurance class reviewed some of the fundamental principles of personnel and security.
To start with, we face two fundamental problems in all discussions of crime, especially white-collar crime, and particularly computer crime: we have incomplete ascertainment and we have incomplete reporting.
The problem of ascertainment lies in the difficulty of identifying crimes or errors that compromise confidentiality and control, at least until the malefactors reveal the data leakage by using the purloined information. And unfortunately, we don’t yet have any centralized reporting of computer crimes or legal requirements for contributions to such a central database – so we lack reliable estimates of the frequency and severity of computer security breaches.
Nonetheless, a broad consensus among IA practitioners does support the belief that a sizable proportion of damage to computer systems – perhaps even half – results from errors and omissions. Attacks from outside our systems and networks have increased over the last two decades because of the huge growth in interconnectivity brought about by wide use of the Internet.
Under these conditions, selecting appropriate employees can be a major contribution to effective IA. This review looks at hiring, management and firing from the perspective of IA managers.
Continue reading
Why Does Style Matter?
One of the major areas of my work in operations management has been the development and refinement of information-security policies. Over the years, I have seen cases in which well-designed policies have been implemented ineffectively, in part because of the style of their presentation. Style is defined in the Encarta Dictionary as a “way of writing or performing: the way in which something is written or performed, as distinct from its content.” Style includes the wording and tone, organization, presentation, and even maintenance of the policies. Style influences the reception and effectiveness of policies.
Continue reading
The following article is a contribution from John Laskey. Everything that follows is entirely John’s work with minor edits from Mich.
Good risk management is fundamental to the security profession. When risks are overlooked or underplayed they can have a direct impact on a business and its reputation. When risks are overplayed, security becomes an inhibitor to productivity and challenges our credibility as professionals. And whenever security is seen as unnecessary, wasteful or uncompetitive then the stock of all security professionals goes down.
Sophisticated tools have been developed to assess security risks. The complexity and responsiveness of these tools require good levels of trust and understanding between security professionals – who understand the risk – and senior executives – who own the assets at risk. So if we wrap up the tools, the experts and the executives inside a good governance structure then we ought to get good security. But there’s something missing.
Continue reading
Managing information assurance (IA) effectively and efficiently depends on defining our goals clearly, laying out how we will achieve our goals, and defining metrics by which we can tell if we are succeeding.
In a recent session of the Management of Information Assurance (IA) course at Norwich University, students and I spent an hour discussing how to define and apply fundamental concepts of security policy.
Four terms recur in discussions of all forms of IA management: the word policy itself, controls, standards, and procedures.
- Policy defines what we intend to accomplish to protect information;
- Controls define the general approaches for implementing the desired protection;
- Standards stipulate specific and widely accepted measures for how well we implement controls consistent with policy; and
- Procedures define the specific operations we must carry out to meet standards in achieving the controls that reflect policy.
Typically, we segregate these four elements of IA management: policy is a high-level statement that evolves relatively slowly – perhaps with quarterly or annual reviews by upper management. Controls and standards should be adjustable by line management (e.g., an information security officer) without having to bother upper managers (e.g., the chief information security officer or chief information officer), but subject to periodic review. Procedures ought to be adjustable by staff to meet conditions that can change from day to day as new threats and vulnerabilities are discovered; no one wants to have to ask an upper manager whether it’s acceptable to warn users about a new phishing trick that appeared this morning.
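The relationship among these four layers – and who may change each one – can be pictured concretely. The sketch below is a minimal illustration only, assuming nothing beyond the definitions above; the class names, the default review period, and the password example are invented for this column, not drawn from any standard.

```python
# Illustrative model of the policy -> controls -> standards -> procedures hierarchy,
# with the change authority for each layer. Names and defaults are assumptions.
from dataclasses import dataclass, field
from typing import List

@dataclass
class Procedure:
    name: str
    steps: List[str]
    change_authority: str = "security staff"            # adjustable day to day

@dataclass
class Standard:
    name: str
    measure: str                                         # how well the control must be implemented
    procedures: List[Procedure] = field(default_factory=list)
    change_authority: str = "information security officer"

@dataclass
class Control:
    name: str
    approach: str                                        # general approach to the desired protection
    standards: List[Standard] = field(default_factory=list)
    change_authority: str = "information security officer"

@dataclass
class Policy:
    statement: str                                       # what we intend to accomplish
    controls: List[Control] = field(default_factory=list)
    review_cycle_months: int = 12                        # quarterly or annual review by upper management
    change_authority: str = "upper management"

# Hypothetical example: a credential-protection policy traced down to a daily procedure.
policy = Policy(
    statement="Protect authentication credentials against disclosure.",
    controls=[Control(
        name="Password management",
        approach="Enforce strong, regularly reviewed authentication secrets.",
        standards=[Standard(
            name="Password strength",
            measure="Minimum 12 characters, screened against breached-password lists.",
            procedures=[Procedure(
                name="Phishing warning",
                steps=["Draft a warning about today's phishing lure", "Send it to all users"],
            )],
        )],
    )],
)

# Staff can change a procedure without escalating; policy changes go to upper management.
print(policy.controls[0].standards[0].procedures[0].change_authority)  # -> security staff
print(policy.change_authority)                                          # -> upper management
```

The point of such a structure is that each layer points only downward: changing a procedure never requires touching the policy above it, while a change to policy eventually cascades through controls, standards, and procedures.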
Continue reading
A colleague recently asked me how vulnerable oil-industry installations are to cyberattack; unfortunately, the consensus seems to be “Very.”
In February 2011, a report surfaced that “Computer hackers working through Internet servers in China broke into and stole proprietary information from the networks of six U.S. and European energy companies, including Exxon Mobil Corp., Royal Dutch Shell Plc and BP Plc….”[1] Other targets included “Marathon Oil Corp., ConocoPhillips and Baker Hughes Inc., …. [a] Houston-based provider of advanced drilling technology.” Publicly traded oil-industry companies hacked by industrial spies or saboteurs might be sued by shareholders if they fail to disclose such attacks: “Investors might also argue they had a right under U.S. securities laws to be informed of the thefts, which a judge might construe as a ‘material’ fact that should have been disclosed….”
Continue reading
Maria Dailey is a senior in the Bachelor of Science in Computer Security and Information Assurance (BSCSIA) in the School of Business at Norwich University. She recently submitted an interesting essay in the IS455 Strategic Applications of Information Technology course, and I suggested to her that we work together to edit and expand it for publication. The following is the result of a close collaboration between us.
* * *
How would you feel about having a computer inside your body – other than your own brain?
A nanocomputer is a computer that is invisible to the human eye but operates like current computers.
“You might stop to consider what the world might be like, if computers the size of molecules become a reality. These are the types of computers that could be everywhere, but never seen. Nano sized bio-computers that could target specific areas inside your body. Giant networks of computers, in your clothing, your house, your car. Entrenched in almost every aspect of our lives and yet you may never give them a single thought.”[1]
Nanotechnology research is proceeding vigorously:
- In 2001, Wired reporter Geoff Brumfiel wrote that researchers at Bell Labs reported that they had “built a Field-Effect Transistor (FET) from a single molecule.” One of the researchers “said this special ability might allow computer circuits to become integrated into credit cards and clothing. The fact that the molecule can be stored easily in a liquid solution also opens up the possibility of using ink-jet type technology to ‘print’ processors on sheets of plastic.”[2]
- Brumfiel also pointed to the startling achievement of researchers at Harvard University who “made semiconducting nanowires that assembled themselves into simple circuits.” Luminary scientist Ralph Merkle,[3] one of the founders of modern cryptography and currently a researcher in nanotechnology, explained to Brumfiel that “Molecular processors… could allow computers to see, hear and interact with humans much more directly.”[2]
- In mid-2011, “A group of Turkish researchers at an Ankara university have manufactured the longest and thinnest nanowires ever produced, by employing a novel method to shrink matter 10-million fold.”[4] Such nanowires could play a valuable role in nanoscale computing.
- A Website devoted to monitoring developments in nanoscale computing has the motto, “Small is beautiful; very small is very beautiful.”[5] The current page alone has 30 entries on a multitude of nanotechnology topics, with more than a thousand more archived. Examples include:
  - DNA Nanotechnology – a basis for biologically-based nanocomputers;[6]
  - Augmented Reality – Microsoft and University of Washington scientists are working on contact lenses with digital displays providing additional information on demand;[7]
  - Building an Artificial Brain – University of Southern California researchers “have made a significant breakthrough in the use of nanotechnologies for the construction of a synthetic brain. They have built a carbon nanotube synapse circuit whose behavior in tests reproduces the function of a neuron, the building block of the brain.”[8]
Continue reading