The Cognitive Security Institute

Research Papers

Individual Deep Fake Recognition Skills are Affected by Viewers’ Political Orientation, Agreement with Content and Device Used

Stefan Sütterlin, Torvald F. Ask, Sophia Mägerle, Sandra Glöckler, Leandra Wolf, Julian Schray, Alaya Chandi, Teodora Bursac, Ali Khodabakhsh, Benjamin J. Knox, Matthew Canham, Ricardo Lugo

AI-generated “deep fakes” are becoming increasingly professional and can be expected to become an essential tool for cybercriminals conducting targeted and tailored social engineering attacks, as well as for others aiming to influence public opinion more generally. While the technological arms race is producing increasingly efficient forensic detection tools, these are unlikely to be in place and applied by common users on an everyday basis any time soon, especially if social engineering attacks are camouflaged as unsuspicious conversations. To date, most cybercriminals do not yet have the necessary resources, competencies or the raw material featuring the target required to produce perfect impersonations. To raise awareness and efficiently train individuals to recognize the most widespread deep fakes, understanding what may cause individual differences in the ability to recognize them is central. Previous research suggests a close relationship between political attitudes and top-down perceptual and subsequent cognitive processing styles. In this study, we investigated the impact of political attitudes and of agreement with the political message content on individuals’ deep fake recognition skills. A total of 163 adults (72 females = 44.2%) judged a series of video clips of politicians’ statements across the political spectrum regarding their authenticity and their agreement with the message conveyed. Half of the presented videos were fabricated via lip-sync technology. In addition to agreement with each particular statement, more global political attitudes towards social and economic topics were assessed via the Social and Economic Conservatism Scale (SECS). Data analysis revealed robust negative associations between participants’ general and, in particular, social conservatism and their ability to recognize fabricated videos. 
This effect was most pronounced where participants agreed with the message content. Deep fakes watched on mobile phones and tablets were considerably less likely to be recognized as such than those watched on stationary computers. To the best of our knowledge, this study is the first to investigate and establish an association between political attitudes and interindividual differences in deep fake recognition. The study further supports recently published research suggesting relationships between conservatism and the perceived credibility of conspiracy theories and fake news in general. Implications for further research on the psychological mechanisms underlying this effect are discussed.

Click Here to Read the Full Paper
Repeat Clicking: A Lack of Awareness Is Not the Problem

Matthew Canham

Although phishing is the most common social engineering tactic employed by cyber criminals, not everyone is equally susceptible. An important finding emerging across several research studies on phishing is that a subset of employees is especially susceptible to social engineering tactics and is responsible for a disproportionate number of successful phishing attempts. Sometimes referred to as repeat clickers, these employees habitually fail simulated phishing tests and are suspected of being responsible for a significant number of real-world phishing-related data breaches. In contrast to repeat clickers, protective stewards are those employees who never fail simulated phishing exercises and habitually report phishing simulations to their security departments. This study explored some of the potential causes of these persistent behaviors (both good and bad) through six semi-structured interviews (three repeat clickers and three protective stewards). Surprisingly, both groups were able to identify message cues indicative of potentially malicious emails. Repeat clickers reported a more internally oriented locus of control and higher confidence in their ability to identify phishing emails, but also described more rigid email checking habits than did protective stewards. One unexpected finding was that repeat clickers failed to recall an identifier which they were explicitly informed they would need to recall later, while the protective stewards recalled the identifier without error. Due to the small sample and exploratory nature of this study, additional research should seek to confirm whether these findings extrapolate to larger populations.

Click Here to Read the Full Paper
Cognitive flexibility but not cognitive styles influence deepfake detection skills and metacognitive accuracy

Torvald F. Ask, Ricardo Lugo, Jonas Fritschi, Karl Veng, Jonathan Eck, Muhammed-Talha Özmen, Basil Bärreiter, Benjamin J. Knox, Stefan Sütterlin

Background: Deepfakes are AI-generated synthetic media increasingly used by cybercriminals to impersonate other individuals during remote social engineering attacks. Previous studies indicated that political orientation is associated with deepfake detection abilities, while being an IT professional is not. Little is known about the cognitive factors predicting individual differences in deepfake detection abilities. In this study, we assess the role of cognitive styles and cognitive flexibility in deepfake recognition skills and metacognitive accuracy. Methods: Cognitive styles and flexibility were measured using an embedded figures test that included a hidden cognitive flexibility task. A total of 247 participants were tasked with rating a series of short video clips as either deepfake or authentic. Metacognitive accuracy was measured as prospective judgements of deepfake detection abilities controlling for actual performance. Results: Cognitive styles were not associated with deepfake detection performance. Cognitively flexible individuals were better at detecting deepfakes and had higher metacognitive accuracy than individuals who were less cognitively flexible. Conclusion: This is the first study assessing the role of cognitive styles and cognitive flexibility in deepfake detection skills and metacognitive judgements about deepfake detection abilities. Our results indicate that cognitively flexible individuals are better at detecting deepfakes and at self-assessing social engineering susceptibility.

Click Here to Read the Full Paper
The UnCODE system: A neurocentric systems approach for classifying the goals and methods of Cognitive Warfare

Torvald F. Ask, Ricardo Lugo, Stefan Sütterlin, Matthew Canham, Daniel Hermansen, Benjamin J. Knox

Cognitive Warfare takes advantage of novel developments in technology and science to influence how target populations think and act. Establishing adequate defense against Cognitive Warfare requires examination of its modus operandi to understand this emerging action space, including the goals and methods that can be realized through science and technology. Recent literature suggests that both human and nonhuman cognition should be considered as targets of Cognitive Warfare. There are currently no frameworks allowing for a unified way of conceptualizing short-term and long-term Cognitive Warfare goals and attack methods that is domain- and species-agnostic. There is a need for a framework developed through a bottom-up approach, informed by neuroscientific principles, that captures the relevant aspects of cognition at a level of complexity that is actionable to decision-makers in war. In this paper, we attempt to close this gap by proposing the Unplug, Corrupt, disOrganize, Diagnose, Enhance (UnCODE) system for classifying the goals and methods of Cognitive Warfare. The system is neurocentric in that it conceptualizes Cognitive Warfare goals from the perspective of how adversarial methods relate to neural information processing in an individual or society. The UnCODE system identifies five main classes of goals: 1) eliminating a target’s ability to produce outputs, 2) degrading a target’s capacity to process inputs and produce outputs, 3) biasing a target’s input-output activity, 4) monitoring and understanding the input-output relationships in targets, and 5) enhancing a target’s capacity and ability to process inputs and produce outputs. Methods can be divided into two categories based on access to the target’s neural system: direct access and indirect access. The UnCODE system is domain- and species-agnostic and allows for interdisciplinary commensurability when communicating attack paths across domains. 
In sum, the UnCODE system is a unifying framework that captures how multiple methods can be used to reach the same Cognitive Warfare goals.

Click Here to Read the Full Paper
Socio-technical communication: The hybrid space and the OLB model for science-based cyber education

Benjamin J. Knox, Øyvind Jøsok, Kirsi Helkala, Peter Khooshabeh, Terje Ødegaard, Ricardo G. Lugo, Stefan Sütterlin

Lessons from safety-critical sociotechnical systems, such as aviation and acute medical care, demonstrate the importance of the human factor and highlight the crucial role of efficient communication between human agents. A large proportion of fatal incidents in aviation have been linked to failures in communication, and cognitive engineering provides the theoretical framework to mitigate risks and increase performance in sociotechnical systems, not only in the civil sector but also in the military domain. Conducting cyber operations in multidomain battles presents new challenges for military training and education, as the increased importance of psychological factors such as metacognitive skills and perspective-taking, in both lower- and higher-ranking staff, becomes more apparent. The Hybrid Space framework (Jøsok et al., 2016) provides a blueprint for describing the cognitive and behavioral constraints on maneuvering between socio-technical and cyber-physical systems whilst cooperating, coordinating or competing with accompanying cognitive styles in the chain of command. We apply the Hybrid Space framework to communicative challenges in the military cyber domain and suggest a three-phase Orienting, Locating, Bridging model for safe and efficient communication between partners. Based on the educational principles of the Norwegian Defence Cyber Academy, we discuss the required skill-sets and knowledge in which cyber officer cadets are trained and taught early in their education, and how these relate to the theoretical framework of the Hybrid Space and the key principles of communication as defined in cognitive engineering.

Click Here to Read the Full Paper
On the Relationship between Health Sectors’ Digitalization and Sustainable Health Goals: A Cyber Security Perspective

Stefan Sütterlin, Benjamin J. Knox, Kaie Maennel, Matthew Canham, Ricardo G. Lugo

Digitalization in the health sector is, as in all societal domains, motivated by a range of anticipated positive consequences, such as increased effectiveness of prevention, treatment and follow-up, generally improved resource efficiency and improved health care availability. This chapter discusses how the ambition of achieving sustainable health goals may be affected by measures of digitalization. It does so by covering digitalization from a cyber security perspective and examining how new potential threats to privacy may influence the public’s trust in their health care system, thereby affecting the envisaged goals of sustainable health care performance. It further discusses how digitalization in the healthcare sector unleashes an enormous potential in terms of cost-effectiveness, decentralization and the availability of specialist services and expertise; which risks and countermeasures these changes entail and how they are currently dealt with; and the role of cyber resilience in ensuring that rapid digitalization does not come at the cost of the essential trust mechanisms that are indispensable in the health sector. Transformation to a digitalized healthcare system must be governed and framed by a range of measures in various societal domains. The World Health Organization’s (WHO) report on the status of eHealth in the European region (WHO 2016) states that its member states “acknowledge and understand the role of e-Health in contributing to the achievement of universal health coverage and have a clear recognition of the need for national policies, strategies and governance to ensure the progress and long-term sustainability of investments”.

Click Here to Read the Full Paper
Planting a Poison SEAD: Using Social Engineering Active Defense to Counter Cybercriminals

Matthew Canham & Juliet Tuthill

By nearly every metric, the status quo of information security is not working. The interaction matrix of attacker-defender dynamics strongly favors the attacker, who needs to be lucky only once. We argue that employing social engineering active defense (SEAD) will be more effective in countering malicious actors than maintaining the traditional passive defensive strategy. The Offensive Countermeasures (OCM) approach to defense advocates three categories of countermeasures: annoyance, attribution, and attack. Annoyance aims to waste the attacker’s time and resources, with the objective not only of deterrence but also of increasing the probability of detection and attribution. Attribution attempts to identify who is launching the attack; gathering as much threat intelligence as possible on who the attacker is provides the best possible defense against future attacks. Finally, attack involves running code on the attacker’s system for the purpose of deterrence and attribution. In this work, we advocate utilizing similar approaches to deny, degrade, and de-anonymize malicious actors by using social engineering tools, tactics, and procedures against the attackers. Rather than fearing the threats posed by synthetic media, cyber defenders should embrace these capabilities by turning them against criminals. Future research should explore ways to implement synthetic media and automated SEAD methods to degrade the capabilities of online malicious actors.

Click Here to Read the Full Paper
Phish derby: Shoring the human shield through gamified phishing attacks

Matthew Canham, Clay Posey, Michael Constantino

To better understand employees’ reporting behaviors in relation to phishing emails, we gamified the phishing security awareness training process by creating and conducting a month-long ‘Phish Derby’ competition at a large university in the U.S. Employees competed against one another for prizes and were instructed to report emails as potential phishing attacks. Prior to the beginning of the competition, we collected demographic data and data related to the concepts central to two theoretical foundations: the Big Five personality traits and goal orientation theory. We found several notable relationships between demographic variables and Derby performance, which was operationalized from the number of phishing attacks reported and employee report speed. Several key findings emerged: past performance on simulated phishing campaigns positively predicted Phish Derby performance; older participants performed better than their younger colleagues, but more education was associated with poorer performance; and individuals who used a mix of PCs and Macs at work performed worse than those using a single platform. We also found that two of the Big Five personality dimensions, extraversion and agreeableness, were both associated with poorer performance in phishing detection and reporting. Likewise, individuals who were driven to perform well in the Derby because they desired to learn from the experience (i.e., learning goal orientation) performed at a lower level than those driven by other goals. Interestingly, self-reported levels of computer skill and perceived ability to detect phishing failed to exhibit a significant relationship with Derby performance. We discuss these findings and describe how focusing on motivating good employee cyber behaviors is a necessary yet too often overlooked component in organizations whose cyber training cultures are rooted in employee click rates alone.
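The abstract states that Derby performance was operationalized from the number of phishing attacks reported and report speed. As a rough illustration of how such an operationalization might look (the one-point-per-report rule, the decaying speed bonus, the 60-minute window, and all names below are assumptions, not the study's actual scoring), a per-employee score could be sketched as:

```python
# Illustrative sketch only: one plausible way to fold report count and
# report speed into a single per-employee score. The speed bonus, the
# 60-minute window, and all names here are assumptions for illustration.
from dataclasses import dataclass

@dataclass
class PhishEmail:
    reported: bool            # did the employee report this simulated phish?
    minutes_to_report: float  # delay from delivery to report, if reported

def derby_score(emails, window_minutes=60.0):
    """One point per reported phish, plus a bonus that decays with report delay."""
    score = 0.0
    for e in emails:
        if e.reported:
            speed_bonus = max(0.0, (window_minutes - e.minutes_to_report) / window_minutes)
            score += 1.0 + speed_bonus
    return score

fast_reporter = [PhishEmail(True, 5), PhishEmail(True, 10)]
slow_reporter = [PhishEmail(True, 55), PhishEmail(False, 0.0)]
assert derby_score(fast_reporter) > derby_score(slow_reporter)
```

Under this toy scoring, reporting more phishing emails and reporting them faster both raise the score, mirroring the two components the Derby combined.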

Click Here to Read the Full Paper
Ambiguous self-induced disinformation (ASID) attacks: Weaponizing a cognitive deficiency

Matthew Canham, Stefan Sütterlin, Torvald Fossåen Ask, Benjamin James Knox, Lauren Glenister, Ricardo Gregorio Lugo

Humans quickly and effortlessly impose context onto ambiguous stimuli, as demonstrated through psychological projective testing and ambiguous figures. This feature of human cognition may be weaponized as part of an information operation. Such Ambiguous Self-Induced Disinformation (ASID) attacks would employ the following elements: the introduction of a culturally consistent narrative, the presence of ambiguous stimuli, the motivation for hypervigilance, and a social network. ASID attacks represent a low-risk, low-investment tactic for adversaries with the potential for significant reward, making this an attractive option for information operations within the context of grey-zone conflicts.

Click Here to Read the Full Paper
The Role of IT Background for Metacognitive Accuracy, Confidence and Overestimation of Deep Fake Recognition Skills

Stefan Sütterlin, Ric Lugo, Torvald F. Ask

The emergence of synthetic media such as deep fakes is considered a disruptive technology shaping the fight against cybercrime as well as enabling political disinformation. Deep faked material exploits humans’ interpersonal trust and is usually applied where technical solutions for deep fake authentication are not in place, unknown, or unaffordable. Improving individuals’ ability to recognise deep fakes where they are not perfectly produced requires training and the incorporation of deep fake-based attacks into social engineering resilience training. Individualized or tailored approaches as part of cybersecurity awareness campaigns are superior to a one-size-fits-all approach, and need to identify persons in particular need of improvement. Research conducted in phishing simulations reports that persons with an educational and/or professional background in information technology frequently underperform in social engineering simulations. In this study, we propose a method and metric to detect overconfident individuals with regard to deep fake recognition. The proposed overconfidence score flags individuals who overestimate their performance and thus pose a previously unconsidered cybersecurity risk. In this study, and in line with comparable research from phishing simulations, individuals with an IT background were particularly prone to overconfidence. We argue that this data-driven approach to identifying persons at risk enables educators to provide more targeted education, evoke insight into learners’ own judgment deficiencies, and help avoid the self-selection bias typical of voluntary participation.
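The actual overconfidence score is defined in the full paper; as a minimal sketch of the general idea (all names, the example numbers, and the 0.2 flagging threshold are assumptions), such a metric can be framed as the gap between a person's prospective self-rated accuracy and their measured deep fake detection accuracy:

```python
# Minimal sketch, not the paper's actual metric: overconfidence as the
# gap between self-predicted and measured accuracy. All names, numbers,
# and the 0.2 flagging threshold are illustrative assumptions.

def overconfidence_score(predicted_accuracy: float, actual_accuracy: float) -> float:
    """Positive values mean the participant overestimated their performance."""
    return predicted_accuracy - actual_accuracy

def flag_overconfident(participants: dict, threshold: float = 0.2) -> list:
    """Return IDs whose self-estimate exceeds measured accuracy by more than threshold."""
    return [pid for pid, (predicted, actual) in participants.items()
            if overconfidence_score(predicted, actual) > threshold]

participants = {
    "p01": (0.90, 0.55),  # high confidence, low deep fake hit rate
    "p02": (0.60, 0.65),  # well calibrated
    "p03": (0.80, 0.50),
}
print(flag_overconfident(participants))  # → ['p01', 'p03']
```

A score of this shape makes the paper's use case concrete: participants flagged by the threshold are candidates for the targeted education the abstract describes.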

Click Here to Read the Full Paper
Neurophysiological and Emotional Influences on Team Communication and Metacognitive Cyber Situational Awareness During a Cyber Engineering Exercise

Torvald F. Ask, Benjamin J. Knox, Ricardo G. Lugo, Ivar Helgetun, & Stefan Sütterlin

Cyber operations unfold at superhuman speeds where cyber defense decisions are based on human-to-human communication aiming to achieve a shared cyber situational awareness. The recently proposed Orient, Locate, Bridge (OLB) model suggests a three-phase metacognitive approach for successful communication of cyber situational awareness for good cyber defense decision-making. Successful OLB execution implies applying cognitive control to coordinate self-referential and externally directed cognitive processes. In the brain, this is dependent on the frontoparietal control network and its connectivity to the default mode network. Emotional reactions may increase default mode network activity and reduce attention allocation to analytical processes resulting in sub-optimal decision-making. Vagal tone is an indicator of activity in the dorsolateral prefrontal node of the frontoparietal control network and is associated with functional connectivity between the frontoparietal control network and the default mode network. Aim: The aim of the present study was to assess whether indicators of neural activity relevant to the processes outlined by the OLB model were related to outcomes hypothesized by the model. Methods: Cyber cadets (N = 36) enrolled in a 3-day cyber engineering exercise organized by the Norwegian Defense Cyber Academy participated in the study. Differences in prospective metacognitive judgments of cyber situational awareness, communication demands, and mood were compared between cyber cadets with high and low vagal tone. Vagal tone was measured at rest prior to the exercise. Affective states, communication demands, cyber situational awareness, and metacognitive accuracy were measured on each day of the exercise. Results: We found that cyber cadets with higher vagal tone had better metacognitive judgments of cyber situational awareness, imposed fewer communication demands on their teams, and had more neutral moods compared to cyber cadets with lower vagal tone. 
Conclusion: These findings provide neuroergonomic support for the OLB model and suggest that it may be useful in education and training. Future studies should assess the effect of OLB-ing as an intervention on communication and performance.

Click Here to Read the Full Paper

Ben D. Sawyer, Dave B. Miller, Matthew Canham, and Waldemar Karwowski

Automation, autonomy, and artificial intelligence (AI) are technologies which serve as extensions of human ability, contributing self-produced, non-human effort (see Figure 1). These three terms encompass a set of computational tools that can learn from data and systems that act in a reasonable, even human-like, manner (Bolton, Machová, Kovacova, & Valaskova, 2018; Dash, McMurtrey, Rebman, & Kar, 2019; Shekhar, 2019). Computing of this nature has been pursued at least since the 1950s, when Simon predicted machines “capable … of doing any work a man can do” (Chase & Simon, 1973), and today such envisioned technology appears under the moniker Artificial General Intelligence (AGI). The desire for synthetic intelligent creations has, in various forms, been a staple of human imagination for much longer (Hancock et al., 2011; Schaefer et al., 2015). While AGI remains, at present, just a dream, a number of promising, and promised, future technologies under development require machines to learn, understand, and adapt to novel situations with at least the flexibility humans exhibit, albeit in a more limited context. The major technology underlying AI, machine learning (ML), is useful for engineering such autonomy, as it can learn from external data input, either with or without direct human oversight. In developing these highly useful technologies, knowledge from human factors and ergonomics (HF/E) can be of great use, especially to designers charged with the difficult task of dovetailing humans and machines in complex systems built to navigate sometimes chaotic environments. Technology serves as a greater extension of human ability each year, and optimal performance still results from hybrid human–machine teams...

Click Here to Read the Full Paper
Deepfake Social Engineering: Creating a Framework for Synthetic Media Social Engineering

Matthew Canham

How do you know that you are actually talking to the person you think you are talking to? Deepfake and related synthetic media technologies may represent the greatest revolution in social engineering capabilities yet developed. In recent years, scammers have used synthetic audio in vishing attacks to impersonate executives and convince employees to wire funds to unauthorized accounts. In March 2021, the FBI warned the security community to expect a significant increase in synthetic media enabled scams over the next 18 months. The security community is at a highly dynamic moment in history in which the world is transitioning away from being able to trust what we experience with our own eyes and ears. This presentation proposes the Synthetic Media Attack Framework to describe these attacks and offers some easy-to-implement, human-centric countermeasures. The framework describes synthetic media social engineering attacks along five dimensions: Medium (text, audio, video, or a combination), Interactivity (pre-recorded, high asynchrony, low asynchrony, or real-time), Control (human puppeteer, software, or a hybrid), Familiarity (unfamiliar, familiar, close), and Intended Target (human or automation, an individual target, or a broader audience). While several technology-based methods to detect synthetic media currently exist, this work focuses discussion on human-centered countermeasures to synthetic media attacks because most technology-based solutions are not readily available to the average user and are difficult to apply in real-time. Effective security policies can help users spot inconsistencies between the behaviors of a legitimate actor and a syn-puppet. Proof-of-life statements will effectively counter most virtual kidnappings leveraging synthetic media. Significant financial transfers should require either multi-factor authentication (MFA) or multi-person authorization. 
These ‘old-school’ solutions will find new life in the emerging world of synthetic media attacks.

Click Here to Read the Full Paper
Phishing for long tails: Examining organizational repeat clickers and protective stewards

Matthew Canham, Clay Posey, Delainey Strickland, Michael Constantino

Organizational cybersecurity efforts depend largely on the employees who reside within organizational walls. These individuals are central to the effectiveness of organizational actions to protect sensitive assets, and research has shown that they can be detrimental (e.g., sabotage and computer abuse) as well as beneficial (e.g., protective motivated behaviors) to their organizations. One major context where employees affect their organizations is phishing via email systems, which is a common attack vector used by external actors to penetrate organizational networks, steal employee credentials, and create other forms of harm. In analyzing the behavior of more than 6,000 employees at a large university in the Southeast United States during 20 mock phishing campaigns over a 19-month period, this research effort makes several contributions. First, employees’ negative behaviors like clicking links and then entering data are evaluated alongside the positive behaviors of reporting the suspected phishing attempts to the proper organizational representatives. The analysis displays evidence of both repeat clicker and repeat reporter phenomena and their frequency and Pareto distributions across the study time frame. Second, we find that employees can be categorized into one of four unique clusters with respect to their behavioral responses to phishing attacks—“Gaffes,” “Beacons,” “Spectators,” and “Gushers.” While each of the clusters exhibits some level of phishing failures and reports, significant variation exists among the employee classifications. Our findings are helpful in driving a new and more holistic stream of research in the realm of all forms of employee responses to phishing attacks, and we provide avenues for such future research.

Click Here to Read the Full Paper
Understanding Online Information Operations: Development of an Influence Network for Scientific Inquiry Testing Environment (INSITE)

Courtney Crooks, Tom McNeil, Ben Sawyer, Matthew Canham, David Muchlinski

Influence operations that promote propaganda, disinformation, and the propagation of social hysteria represent an existential threat to the United States. Effective countermeasures must be developed that can respond in near real-time and anticipate future adversarial actions. One of the most significant hurdles to developing effective countermeasures is the lack of a complex and dynamic testing environment that provides adequate assessment of algorithms and automated detection tools. Through the integration of social sciences with applied mathematics, dynamic, multi-factorial phenomena such as social influence and response behavior within complex social systems may be investigated with scientific rigor. The proposed capability will fulfill a critical need for developing a social-centric model to understand and assess complex influence factors and design of social engineering countermeasures that promote national security interests. To develop such a model, the research community also needs an accessible social media platform that is controlled by researchers, for researchers, and can be used to test new ideas in a realistic setting. A researcher-controlled platform will not only provide unprecedented access to data but will also allow researchers to test mitigation intervention strategies that would be impossible to implement in existing social media platforms.

Click Here to Read the Full Paper
Confronting Information Security’s Elephant, The Unintentional Insider Threat

Matthew Canham, Clay Posey, and Patricia S. Bockelman

It is well recognized that individuals within organizations represent a significant threat to information security as they are both common targets of external attackers and can be sources of malicious behavior themselves. Notwithstanding these facts, one additional aspect of human influence in the security domain is largely overlooked: the role of unintentional human error. Such lack of emphasis is surprising given relatively recent reports that highlight error’s central role in being the root cause for numerous security breaches. Unfortunately, efforts that recognize human error’s influence suffer from not employing a commonly accepted error framework and lexicon. We thus take this opportunity to review what the data show regarding error-based breaches across various types of organizations and create a nomenclature and taxonomy rooted in the rich history of safety research that can be applied to the information security domain. Our efforts represent a significant step in an effort to classify, monitor, and compare the myriad aspects of human error in information security in the hopes that more effective security education, training, and awareness (SETA) programs can be devised. Further, we believe our efforts underscore the importance of revisiting the daily demands placed on organizational insiders in the workplace.

Click Here to Read the Full Paper
The Enduring Mystery of the Repeat Clickers

Matthew Canham et al.

Individuals within an organization who repeatedly fall victim to phishing emails, referred to as Repeat Clickers, present a significant security risk to the organizations within which they operate. The causal factors for Repeat Clicking are poorly understood. This paper argues that this behavior afflicts a persistent minority of users and may be explained either as a main effect of individual traits (personality or others) or as a moderated interaction between traits and other factors such as cultural influences, situational factors, or social engineering techniques. Because Repeat Clickers represent a disproportionate risk, identifying causal factors and developing mitigations for this behavior should provide a substantial return on investment in improving the security of an organization. Developing such mitigations will require a better understanding of the individual differences contributing to repeat clicking behavior. We present pilot data and suggest research questions to improve understanding of the factors contributing to repeated victimization by phishing emails.

Click Here to Read the Full Paper
Neurosecurity: Human Brain Electro-Optical Signals as MASINT

Matthew Canham, Ben D Sawyer

Applied neuroscience presently allows not only scientific discovery-oriented probing of the inner workings of the mind, but increasingly the probing of individual minds to gather intelligence. Significant advances in neuroimaging, leveraging both active and passive electro-optical energy, can reveal specifics of information held in the mind even without cooperation (e.g., Lange et al., 2018; Sawyer et al., 2016a). The processes of the brain increasingly join the many other energetic sources from which quantitative and qualitative data analysis may extract identifying features and other useful intelligence (Sawyer & Canham, 2019). Indeed, it is increasingly appropriate to discuss the human brain as a system that can be read from and written to, and whose operations may therefore be collected for analysis or influenced (Sawyer & Canham, 2019). We argue here that we are witnessing the end of the era…

Click Here to Read the Full Paper
Developing Training Research to Improve Cyber Defense of Industrial Control Systems

Matthew Canham, Stephen M Fiore, Bruce D Caulkins

Cyber-attacks are a common aspect of modern life. While cyber-based attacks can expose private information or shut down online services, some of the most potentially dangerous attacks alter the sensor and control data used by Industrial Control Systems with the intent of causing severe damage to the technical processes these systems control. The damage caused by the Stuxnet worm is one of the most infamous examples of this type of attack. Because only the most advanced adversaries are able to mount successful attacks against these systems, detecting such attacks is extremely challenging. Automated detection systems have not yet evolved to the point of consistently and successfully detecting these attacks, and for this reason human operators will need to be involved in Industrial Control Systems protection for the foreseeable future. We propose several potential training-based solutions to aid the defense of these systems.

Click Here to Read the Full Paper
A Computational Social Science Approach to Examine the Duality between Productivity and Cybersecurity Policy Compliance within Organizations

Clay Posey and Matthew Canham

Organizational employees often face conflicting responsibilities in their daily tasks. On one hand, employees must be productive members of their organization; on the other, they must perform their tasks while conforming to cybersecurity policies, often at a cost to their performance. Such compliance can also increase stress, which may already be relatively high given the workload placed on employees. In addition to this tension, organizations vary significantly in the emphasis they place on productivity versus cybersecurity goals. Employees use this and other information when deciding whether to follow cybersecurity policies for a given task. While some of these decisions rest on rational cost-versus-benefit analyses, many are borne of habituation. Despite the importance of understanding individual-level decision making regarding performance (both productivity and compliance), little research has examined how such micro-level actions aggregate into macro-level phenomena within organizations. Given this opportunity, we explore how varying workload, emphasis on productivity and compliance (i.e., culture), and the degree to which compliance decreases productivity for a given task (i.e., friction) affect a simulated organization's employees' stress levels. Moreover, we investigate how these factors, together with rationality versus habituation and morality, combine to form emergent noncompliance patterns at the organizational level.

Click Here to Read the Full Paper
Macrocognition Applied to the Hybrid Space: Team Environment, Functions and Processes in Cyber Operations

Øyvind Jøsok, Benjamin J. Knox, Kirsi Helkala, Kyle Wilson, Stefan Sütterlin, Ricardo G. Lugo & Terje Ødegaard

As cyber is increasingly integrated into military operations, conducting military cyber operations requires the effective coordination of teams. This interdisciplinary contribution discusses teams working in, and in relation to, the cyber domain as part of a larger socio-technical system, and the need for a better understanding of the human factors that contribute to individual and team performance in such settings. We extend an existing macrocognitive model describing functions and processes into a conceptual framework that maps cognitive processes along cyber-physical and tactical-strategic dimensions (the Hybrid Space), in order to better understand environmental complexity and how to operate effectively in a cyber team context. Current experience from conducting cyber network defence exercises at the Norwegian Defence Cyber Academy and implications for future education and training are discussed.

Click Here to Read the Full Paper
Exploring the Hybrid Space: Theoretical Framework Applying Cognitive Science in Military Cyberspace Operations

Øyvind Jøsok, Benjamin J. Knox, Kirsi Helkala, Ricardo G. Lugo, Stefan Sütterlin & Paul Ward

Operations in cyberspace are enabled by a digitized battlefield. The ability to control operations in cyberspace has become a central goal for defense forces. As a result, terms like cyber power, cyberspace operations and cyber deterrence have begun to emerge in military literature in an effort to describe and highlight the importance of related activities. Future military personnel, in all branches, will encounter the increased complexity of joint military operations with cyber as the key enabler. This constant change and complexity raise the demands on the structure and content of education and training. This interdisciplinary contribution discusses the need for a better understanding of the relationships between cyberspace and the physical domain and the cognitive challenges this represents, and proposes a theoretical framework, the Hybrid Space, allowing for the application of psychological concepts in assessment, training and action.

Click Here to Read the Full Paper