A Study of Information Security Awareness Program Effectiveness in Predicting End-User Security Behavior by James Michael Banfield (Dissertation)

Is there any way I can tackle this assignment? I don't really understand what a literature review is. Please help.

Attachments 1–12

ATTACHMENT PREVIEW: A Study of Information Security Awareness Program Effectiveness in Predicting EndUser.pdf

A Study of Information Security Awareness Program Effectiveness in Predicting EndUser Security Behavior

by

James Michael Banfield

Dissertation Submitted to the College of Technology

Eastern Michigan University

in partial fulfillment of the requirements for the degree of DOCTOR OF PHILOSOPHY

Dissertation Committee:

Denise Pilato, Ph.D.

Bilquis Ferdousi, Ph.D.

Michael McVey, Ed.D.

Tierney Orfgen McCleary, Ph.D.

August 31, 2016

Ypsilanti, Michigan

ProQuest Number: 10250908

All rights reserved

INFORMATION TO ALL USERS

The quality of this reproduction is dependent upon the quality of the copy submitted.

In the unlikely event that the author did not send a complete manuscript

and there are missing pages, these will be noted. Also, if material had to be removed,

a note will indicate the deletion.

ProQuest 10250908

Published by ProQuest LLC (2017). Copyright of the Dissertation is held by the Author. All rights reserved.

This work is protected against unauthorized copying under Title 17, United States Code

Microform Edition © ProQuest LLC.

ProQuest LLC.

789 East Eisenhower Parkway

P.O. Box 1346

Ann Arbor, MI 48106-1346

Dedication

I am honored to dedicate this effort to the two most influential people in my life,

my parents, Joyce and Richard Banfield. While both have passed on now, the lessons

they taught me growing up are still with me and helped to complete this project. I know

that they are watching from above with much pride.

Abstract

As accessibility to data increases, so does the need to increase security. For organizations

of all sizes, information security (IS) has become paramount due to the increased use of

the Internet. Corporate data are transmitted ubiquitously over wireless networks and have

increased exponentially with cloud computing and growing end-user demand. Both

technological and human strategies must be employed in the development of an

information security awareness (ISA) program. By creating a positive culture that

promotes desired security behavior through appropriate technology, security policies, and

an understanding of human motivations, ISA programs have been the norm for

organizational end-user risk mitigation for a number of years (Peltier, 2013; Tsohou,

Karyda, Kokolakis, & Kiountouzis, 2015; Vroom & Solms, 2004). By studying the

human factors that increase security risks, more effective security frameworks can be

implemented. This study focused on testing the effectiveness of ISA programs on end-user security behavior.

The study included the responses of 99 of 400 employees at a mid-size corporation.

The theory of planned behavior was used as the model for measuring the results of the survey tool.

While the collected data indicated that ISA does produce change in security behavior, the results were not statistically significant; thus, the study fails to reject the null hypothesis.
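The "fail to reject" conclusion rests on a significance test of the observed relationships. As a minimal illustration only, using fabricated data rather than the study's actual responses, the sketch below shows how a Pearson correlation from a sample of 99 respondents can be converted to a t-statistic and compared against the two-tailed critical value at alpha = .05:

```python
import math
import random

def pearson_r(x, y):
    """Sample Pearson correlation coefficient."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

def t_statistic(r, n):
    """t = r * sqrt(n - 2) / sqrt(1 - r^2), with df = n - 2."""
    return r * math.sqrt(n - 2) / math.sqrt(1 - r ** 2)

random.seed(1)
n = 99  # usable responses in the study
isa_score = [random.gauss(0, 1) for _ in range(n)]
# Fabricated behavior scores, only weakly related to the ISA scores:
behavior = [0.1 * a + random.gauss(0, 1) for a in isa_score]

r = pearson_r(isa_score, behavior)
t = t_statistic(r, n)
# Two-tailed critical t for df = 97 at alpha = .05 is roughly 1.985;
# |t| below that value means we fail to reject the null hypothesis.
significant = abs(t) > 1.985
```

A weak correlation at this sample size typically falls short of the critical value, which is the same decision logic behind the abstract's conclusion.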

Table of Contents

Dedication
Abstract
List of Tables
List of Figures

Chapter I. Introduction
    Background of the Study
    Importance of the Study
    Statement of the Problem
    Objective of the Study
    Research Questions
    Research Hypotheses
    Assumptions
    Limitations and Delimitations
    Definitions
    Summary

Chapter II. Review of Literature
    Introduction
    Information Security Awareness (ISA)
    Security Misbehavior
    Theory of Planned Behavior (TPB)
    Attitude Toward Behavior (ATT)
    Subjective Norm (SN)
    Self-Efficacy (SE) or Perceived Behavioral Control (PBC)
    Computer Self-Efficacy (CSE) and Security Self-Efficacy (SSE) Domains
    Other Behavioral Theories
    ISA Research
    Summary

Chapter III. Methods
    Introduction
    Research Design
    Population and Sample
    Human Subjects Approval
    Data Collection/Analysis
    Validation
    Personnel, Budget, Timeline
    Summary

Chapter IV. Results
    Introduction
    Normality
    Completion Rates
    Demographics
    Data Analysis

Chapter V. Conclusions and Discussion
    Summary
    Conclusion
    Future Research

References

Appendix A: MOU
Appendix B: Survey
Appendix C: Letter to Population
Appendix D: Human Subjects
Appendix E: Consent Page
Appendix F: Item Level Frequency
Appendix G: Descriptive Data (Item)

List of Tables

1. Definitions of ISA Strategies
2. Listing of Security Panel Comments
3. Cronbach Alpha Test for Reliability
4. Combined Antecedent Cronbach Alpha Scores
5. Transformed Data
6. Demographic Information, Gender and Age Frequency Report
7. Demographic Information, Education Report
8. Model Summary
9. Coefficients
10. Pearson Correlation of ATT on SI
11. Pearson Correlation of SN on SI
12. Pearson Correlation of PBC on SI

List of Figures

1. Construct of Ajzen's TPB theory
2. Illustration of the intersection of security expertise and intention
3. Illustration of the study hypotheses
4. Formulas for skewness and kurtosis
5. Histogram images of data distribution
6. The corrected reliable hypothesis table (antecedents changed for validity)
7. Pearson correlation scatterplot SI/ATT
8. Pearson correlation scatterplot SI/SN
9. Pearson correlation scatterplot SI/PBC
10. Correlations of all studied constructs

Chapter I: Introduction

Abundant research suggests that individual users play a critical role in the security of

information systems and that no solution can be solely based in technology (Brdiczka et al.,

2012; Crossler et al., 2013; Dhillon, Syed, & Pedron, 2016; Hsu, Shih, Hung, & Lowry,

2015). Cybercriminals (also known as hackers) typically employ well-known social engineering tricks (persuading users into careless security behaviors) through malware, email

phishing, and other behavior-related tactics in order to circumvent technical security

solutions (Mann, 2012). Such “social engineering” continues to plague end-users, despite the

existence of a breadth of information and countermeasures that help promote prudent security

behavior (Furnell & Moore, 2014). It follows that informed awareness and an understanding

of the types of behaviors that compromise security are key ingredients for a successful risk-mitigation program (Goodhue & Straub, 1991; Siponen & Oinas-Kukkonen, 2007; Viduto,

Maple, Huang, & López-Peréz, 2012).

Both technological and human strategies must be employed in the development of an

information security awareness (ISA) program. By creating a positive culture that promotes

desired security behavior through appropriate technology, security policies, and an

understanding of human motivations, ISA programs have been the norm for organizational

end-user risk mitigation for a number of years (Peltier, 2013; Tsohou, Karyda, Kokolakis, &

Kiountouzis, 2015; Vroom & Solms, 2004). It is therefore interesting to analyze whether ISA

programs are effective in building desired end-user security behavior and whether they

deliver on the promise of more secure user actions within the organization.

As accessibility to data increases, so does the need to increase security. For

organizations of all sizes, information security (IS) has become paramount due to the

increased use of the Internet. Corporate data are transmitted ubiquitously over wireless

networks and have increased exponentially with cloud computing and growing end-user

demand. This swing can be seen in the vast increase in the number of cybercrime-related

incidents in the past few years. According to Brahme and Joshi (2013), cybercrime increased

steadily every year from 1998 to 2013, with IS events peaking at over 3.5 million reported

incidents in 2013. IS seeks to protect data under the confidentiality, integrity, and availability

(CIA) model that has been in place since 1969 (Howe, 1978) and which is still used as a

framework for today’s security programs (Younis & Kifayat, 2013).

The three tenets of the CIA model embrace both technological and behavioral

components of security: Confidentiality allows information to be used or seen only by

intended targets; integrity dictates that data will be unchanged between author and consumer;

and availability ensures that systems are up and able to provide information when called

upon (Whitman & Mattord, 2011). The large majority of risk mitigation strategies are built

on the CIA framework, and current research focuses more on the human components of the

model (Alfawaz, Nelson, & Mohannak, 2010). This focus on human factors strays from the

more traditional technological approach toward security.
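As an illustrative sketch only (the event names and the mapping below are invented for this example, not drawn from the study), the three tenets can be treated as labels for classifying security events:

```python
# The three CIA tenets, as defined above.
CIA_TENETS = {
    "confidentiality": "information is used or seen only by intended targets",
    "integrity": "data are unchanged between author and consumer",
    "availability": "systems are up and able to provide information on demand",
}

def violated_tenet(event_type):
    """Map a coarse event type to the CIA tenet it primarily violates.

    The event types here are hypothetical examples.
    """
    mapping = {
        "data_leak": "confidentiality",  # e.g., records exposed to the public
        "tampering": "integrity",        # e.g., data altered in transit
        "outage": "availability",        # e.g., a service knocked offline
    }
    return mapping.get(event_type, "unclassified")
```

Events that fall outside the mapping (for instance, a phishing attempt that has not yet caused loss) come back as "unclassified", reflecting that the CIA model describes impact rather than attack technique.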

A technologically-driven philosophy of cyber security is grounded in the theory that

innovative technology builds stronger defenses against data loss and that human error can be

curbed with deterrence. However, it has been shown that an organization’s dependence upon

deterrence and technical solutions to alleviate security risk is a vast oversight, as other human

behavioral factors must be considered (Balcerek, Frankowski, Kwiecień, Smutnicki, &

Teodorczyk, 2012; Crossler et al., 2013; Hu et al., 2011), and research that focuses on secure

end-user habits is increasing (Alfawaz et al., 2010; Siponen, Mahmood, & Pahnila, 2014).

Such an approach proactively compensates for the many unanticipated factors (born in

human carelessness) that compromise security and for which technology continues to fall

short.

For instance, the problem with a penalty deterrent model is that it assumes all security

attacks are done with malicious intent, ignoring the capricious idiosyncrasies of accidental

events (D’Arcy, Hovav, & Galletta, 2011; Desman, 2013; Guo, Yuan, Archer, & Connelly,

2011). A better solution is to develop an ISA program creating a culture of security

awareness by combining technology, security policy, and an understanding of human

behavior. Increasing employee awareness of how to protect data in both technical and human

terms has been found to be the best risk-mitigation strategy within an organization, reducing

the need, cost, and frustration of planning for every conceivable contingency (Bulgurcu et al.,

2010; D’Arcy et al., 2009; Pahnila, Karjalainen, & Siponen, 2013). With these factors in

mind, ISA would seem to be a more sensible alternative to the traditional technologically-driven approach to cybersecurity.

Abundant research supports the use of ISA as an effective method for risk-management

programs (Ciampa, 2013; Mylonas, Kastania, & Gritzalis, 2013; Peltier, 2013), but research

is lacking as to whether it truly promotes secure end-user habits. There is little to no research

that looks at data loss, accidental or malicious, and how it relates to the habitual tendencies of

end-users as moderated by ISA in mid-sized organizations. More specifically, it would be

beneficial to the future of cybersecurity to analyze ISA’s contribution to information security

risks and human factors in the corporate environment. By shedding light on the human

factors that increase security risks, more effective security frameworks can be implemented

hand in hand with the development of risk-mitigation strategies (Lin, 2010; Siponen et al.,

2014; Whitman & Mattord, 2011). Such an analysis would seem to be critical toward

understanding the true potential of ISA in effectively deterring cyber-attacks in the corporate

setting.

Another factor that must be considered is that different-sized organizations require

different security solutions. Since organizations vary greatly in staff size, budget, and culture,

they present many of their own characteristic security challenges. This particular study will

review cyber security in a single midsize organization and thus create a tool to measure the

effects of ISA programs in other midsize organizations. A midsize company is defined by

Gartner (2014), the leading IT analytics and metric organization in the world, as one that has

100–999 employees (end-users) with annual revenue of more than $50 million but less than

$1 billion. An end-user is defined as the person for whom a hardware or software solution is

designed. The terms organization and company will be treated with equal meaning in this

document.

Organizational security behavior, or security hygiene, is the set of information data

protection expectations that a company places on the end-user as part of security practice. A

security event is a change from the operational norm of information systems or services that

violates typical security policy, safeguards, or technology (Whitman & Mattord, 2011). As

a consequence, technical and human security controls vary with the number of end-users and

the type of data to be secured (Vroom & Von Solms, 2004). However, end-users of digital

data do share similar security concerns, regardless of the size of an organization or the type

of data, since data loss in any organization could be catastrophic (Whitman & Mattord,

2011). Hence, tactics for diligent planning and the constant assessment of behavioral traits

that compromise company security would translate well to any company size or setting.

This study will extend Ajzen’s (1985) theory of planned behavior (TPB) to study the

effect of ISA on end-user behaviors. Ajzen’s research found that by measuring an individual’s

intention, one could, in turn, predict behavior. A survey will collect data on the three main

constructs (Fig 1) of TPB for a single midsize company that deploys an ISA program as a

part of its security strategy. The results of the research will be limited to the company in

question, as all ISA programs are deployed with some variation. The tool, however, could be

used as a predictor in other midsize companies.

TPB constructs include attitude toward behavior (ATT), subjective norm (SN), and

perceived behavioral control (PBC; Ajzen, 1985). ATT is a measure of how important the

behavior in question is to the individual and is formed from Davis’s (1989) technology

acceptance model, specifically ease of use and perceived usefulness. SN is a social

measurement that examines the social burden (driven by peer and supervisor influences) to

perform or not perform a certain behavior. PBC is built upon Bandura’s (1977) well-tested theory of perceived self-efficacy as a key foundation of behavior (Ajzen, 1980, 1985).

Figure 1. Construct of Ajzen’s TPB theory.
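In practice, each TPB construct is typically measured with several Likert-scale survey items, and intention is then modeled as a weighted combination of the construct scores. The sketch below is a hedged illustration only: the items and weights are hypothetical placeholders (real weights would be estimated by regression), not Ajzen's or this study's values:

```python
def construct_score(items):
    """Mean of 5-point Likert responses for one TPB construct."""
    return sum(items) / len(items)

def predicted_intention(att_items, sn_items, pbc_items,
                        w_att=0.4, w_sn=0.3, w_pbc=0.3):
    """Weighted combination of ATT, SN, and PBC construct scores.

    The weights here are illustrative; TPB studies estimate them empirically.
    """
    att = construct_score(att_items)
    sn = construct_score(sn_items)
    pbc = construct_score(pbc_items)
    return w_att * att + w_sn * sn + w_pbc * pbc

# One fabricated respondent: three Likert items per construct.
intention = predicted_intention([4, 5, 4], [3, 3, 4], [5, 4, 4])
```

Under TPB, a higher predicted intention score would forecast a greater likelihood of actually performing the secure behavior.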

When data loss occurs from within a company, experts categorize it as an internal

threat. Internal threats come in two major forms—intentional harm and misuse—but both

forms result in data loss and/or service outage (Siponen, Mahmood, & Pahnila, 2014).

Predictably, the nomenclature used to describe an organization’s actions to mitigate threats

describes defensive measures, while attacks, either intentional or unintentional, are described

and classified as offensive threats (Lin, 2010). Table 1 describes some current tactics that

companies use to deter internal threats, including end-user behavioral measures and ISA, the

focus of this research (Ahmad, Maynard, & Park, 2012; Whitman & Mattord, 2013). Table 1

illustrates broad organizational defense tactics that preceded end-user security measures.

Table 1

Definitions of ISA Strategies

Organizational Information Security technology deployed: hardware/software tools used to mitigate security events.
Operational measurement: end-user awareness of installed technology such as firewalls, intrusion detection, access controls, and other deployed tools.

Organizational Information Security awareness/culture: the security culture of the organization.
Operational measurement: end-user awareness of the corporate security environment. Is security an “all” corporate norm, or the responsibility of a few?

Organizational Information Security knowledge: knowledge level of security topics (the other constructs).
Operational measurement: end-user understanding and knowledge of organizational security tools and techniques.

Security self-efficacy: the end-user’s own self-confidence to be and act securely.
Operational measurement: end-user knowledge of how security tools work, attack and defend techniques, and organizational risk structure.

Policy, Governance, and Compliance: an integrated approach used by corporations to act in accordance with the guidelines set for data and system protection within given vertical markets.
Operational measurement: end-user knowledge of the security policy and guidelines deployed at a given organization.

Benign detrimental security behavior: unintentional behavior which could lead, or has led, to a security event.
Operational measurement: user survey responses on behavioral practice in information security:
* End-user resistance to social engineering
* End-user data privacy, use of encryption
* End-user handling of virus/malware

Background of the Study

Current research demonstrates that security is not simply a technology problem but is

primarily a people problem caused by malicious intent, carelessness, or accident (Desman,

2013; Kim, Lee, Chun, & Benbasat, 2014; Peltier, 2013; Whitman & Mattord, 2013). For

example, in January 2013, The Wall Street Journal reported on a malicious insider event in

which 150 million private records containing social security numbers, financial information,

and other private data had been stolen by four employees from the database servers of Dun

and Bradstreet and sold for profit (Chu, 2013). In another example of malicious insider

behavior leading to extreme data loss, DatalossDB.org (2014) reported that credentials for

104 million credit cards were stolen from the Korea Credit Bureau by inside employees

and were later used to purchase more than $20 million worth of goods. In an example of

accidental loss, the State of Texas released the social security numbers of 6.5 million

registered voters in 2012 (DatalossDB.org, 2013). In 2011, the Texas Comptroller of Public

Accounts accidentally exposed 3.5 million teacher records that included salary, social security

numbers, and other sensitive data to the public Internet (Shannon, 2011). There are

thousands of such reports of data loss that range from small to large company security issues

(DatalossDB.org, 2013). In the majority of cases, data loss can be attributed to human error

or malicious intent (Spears & Barki, 2010). For this reason, research into the effectiveness of

ISA on end-users and the promotion of a cyber-secure working environment…


ATTACHMENT PREVIEW: An empirical assessment of user online security behavior.pdf

ABSTRACT

Title of Document: AN EMPIRICAL ASSESSMENT OF USER ONLINE SECURITY BEHAVIOR: EVIDENCE FROM A UNIVERSITY

Sruthi Bandi, Master of Information Management, 2016

Directed By: Dr. Michel Cukier, A. James Clark School of Engineering; Dr. Susan Winter, College of Information Studies

The ever-increasing number and severity of cybersecurity breaches make it vital to

understand the factors that make organizations vulnerable. Since humans are

considered the weakest link in the cybersecurity chain of an organization, this study

evaluates users’ individual differences (demographic factors, risk-taking preferences,

decision-making styles and personality traits) to understand online security behavior.

This thesis studies four different yet tightly related online security behaviors that

influence organizational cybersecurity: device securement, password generation,

proactive awareness and updating. A survey (N=369) of students, faculty and staff in

a large mid-Atlantic U.S. public university identifies individual characteristics that

relate to online security behavior and characterizes the higher-risk individuals that

pose threats to the university’s cybersecurity. Based on these findings and insights

from interviews with phishing victims, the study concludes with recommendations to

help similar organizations increase end-user cybersecurity compliance and mitigate

the risks caused by humans in the organizational cybersecurity chain.

AN EMPIRICAL ASSESSMENT OF USER ONLINE SECURITY BEHAVIOR: EVIDENCE FROM A UNIVERSITY

By Sruthi Bandi

Thesis submitted to the Faculty of the Graduate School of the University of Maryland, College Park in partial fulfillment of the requirements for the degree of Master of Information Management

2016

Advisory Committee: Dr. Susan Winter, Co-chair; Dr. Michel Cukier, Co-chair; Dr. Brian Butler, Committee Member; Dr. Jessica Vitak, Committee Member

ProQuest Number: 10161071

All rights reserved

Published by ProQuest LLC (2016). Copyright of the Dissertation is held by the Author. All rights reserved.

© Copyright by Sruthi Bandi, 2016

ACKNOWLEDGEMENTS

This thesis journey has been a challenging yet immensely gratifying and rewarding learning experience. I would like to take this opportunity to thank everyone who has made this happen.

Foremost, I would like to thank my advisors, Dr. Michel Cukier and Dr. Susan

Winter, who have not only served as my thesis chairs, but also guided, challenged,

and encouraged me throughout the process. My advisors and other committee

members, Dr. Brian Butler and Dr. Jessica Vitak, have patiently assisted me and offered extremely valuable insights from varied perspectives, which have always challenged me to perform better. Thank you all for the extensive guidance.

I would like to thank my research team, Dr. Josiah Dykstra and Amy Ginther,

who were instrumental in the design and execution of this study. Thank you for the

persistent support and valuable feedback. I truly appreciate you both taking the time and effort to read and edit the thesis drafts. A special thanks to you, Amy, for all the hard

work on the infinite number of approvals and data requests. I couldn’t have done it

without you. I would also like to thank Margaret, Anmol and Fiona for the help on the

writing.

I would like to acknowledge the funding from the Department of Defense for

my research. I would also like to thank the members in the Division of IT for

providing me with the required data and infrastructure to carry out the study.

I owe my deepest thanks to my family, the Bandis, the Chikkams, and the Cheruvus, for their hope in my quests and unconditional love. In particular, my

pillars of strength, Amma, Nanna, Aadi and Chintu for always believing in me and

standing by my side. The belief they have in me is what drives me every day and I can

never thank them enough in my life.

TABLE OF CONTENTS

LIST OF TABLES
LIST OF FIGURES
1. INTRODUCTION
2. LITERATURE REVIEW
    2.1. User Security Behavior
    2.2. Decision-Making
    2.3. Risk-Taking Preferences
    2.4. Decision-Making Styles
    2.5. Personality Traits
    2.6. Demographic Factors
3. RESEARCH MODEL AND HYPOTHESIS
    3.1. Thesis Statement
    3.2. Research Questions
    3.3. Research Model
    3.4. Hypotheses
4. METHODS
    4.1. Procedures
        4.1.1. Surveys
        4.1.2. Interviews
    4.2. Measures
        4.2.1. Surveys
    4.3. Data Analysis
5. RESULTS
    5.1. Factor Analysis and Reliability Testing
    5.2. Descriptives
    5.3. Multiple Regression Analysis
        5.3.1. Device Securement
        5.3.2. Password Generation
        5.3.3. Proactive Awareness
        5.3.4. Updating
    5.4. User Online Security Behavior by Demographics
        5.4.1. Age
        5.4.2. Gender
        5.4.3. Role
        5.4.4. Majors
        5.4.5. Citizenship
        5.4.6. Employment Length in the University
    5.5. Non-Response Analysis
    5.6. Interview Analysis
6. DISCUSSION
    6.1. Device Securement
    6.2. Password Generation
    6.3. Proactive Awareness
    6.4. Updating
    6.5. Recommendations
7. CONCLUSION
    7.1. Summary
    7.2. Limitations
    7.3. Future Research
8. APPENDIX
    8.1. Appendix A – Survey Instrument
    8.2. Appendix B – Interview Protocol & Observation Form
    8.3. Appendix C – Correlation Matrix between Predictor and Outcomes
    8.4. Appendix D – Means and Standard Deviations for All Continuous Predictors and Outcomes
9. REFERENCES

List of Tables

Table 1: Factor loadings for 16 items of the SeBIS scale (N = 369)
Table 2: Demographic data (N = 369)
Table 3: Regression results for online security behavior of device securement
Table 4: Regression results for online security behavior of password generation
Table 5: Regression results for online security behavior of proactive awareness
Table 6: Regression results for online security behavior of updating
Table 7: Summarizing the regression analysis coefficients
Table 8: Mean differences in security behavior by age
Table 9: Mean differences in security behavior by gender
Table 10: Mean differences in security behavior by role
Table 11: ANCOVA on security behavior by role controlled by age
Table 12: Mean differences in security behavior by major
Table 13: Mean differences in security behavior by citizenship
Table 14: Mean differences in security behavior by employment length
Table 15: Identified problem areas affecting security of the organization
Table 16: Results of hypothesis testing for device securement
Table 17: Results of hypothesis testing for password generation
Table 18: Results of hypothesis testing for proactive awareness
Table 19: Results of hypothesis testing for updating
Table 20: Overall summary of the results testing the research model

List of Figures

Figure 1: The factors that influence user security behavior (taken from Leach, 2003)
Figure 2: Research model

1. Introduction

Cybercrime is a persistent problem, and the increase in the victimization of users in

recent years is alarming (Interpol, 2015). A 2013 survey from the Pew Research

Center reveals that 11% of Internet users have experienced theft of vital personal

information, and 21% had an email or social networking account compromised

(Rainie et al., 2013). The continual increase in the detection of information security

compromise incidents emphasizes this unrelenting problem. PricewaterhouseCoopers

(PWC), in its annual Global State of Information Security Survey, reports an overall

38% increase in detection of security incidents in 2015 from 2014 (PWC, 2015). The

survey also noted that employees are the most-cited source of cybersecurity

compromise in organizations.

Human vulnerability is widely accepted as a significant factor in

cybersecurity. Recently, a Wall Street Journal story asserted that humans are the

weakest link in the cybersecurity chain, and that this weakest link can be turned into

the strongest security asset if the right actions are taken (Anschuetz, 2015). To

understand how this weakest link, the user, could be turned into the strongest asset, it is

important to examine the underlying factors that influence user cybersecurity

behavior.

There are broad categories of cybersecurity attacks ranging from money

laundering to social engineering fraud (Interpol, 2015) that take advantage of the

human vulnerabilities in cybersecurity. For example, social engineering frauds

involve scams used by criminals to deceive the victims into giving out personally

identifiable information or financial information. Phishing is one of the most common

kinds of cybersecurity attacks and is used as an example here (US-Cert, 2013).

Phishing attacks use fake websites, emails or spam to lure and capture a person’s

personal information. Phishers take advantage of the Internet and its anonymity to

commit a diverse range of criminal activities. The types of phishing attacks are

evolving over time and the Anti-Phishing Working Group, a coalition unifying the

global response to cybercrime across industries, states in their latest report that as

many as 173,262 unique phishing reports have been submitted in the fourth quarter of

2015 (Anti-Phishing Working Group, 2016). These attacks are particularly sensitive

to human reactions because for an attack to be successful, the human target must fall

for the deception. Hence, it is very important to study and understand human behavior

to reduce the damage of phishing and similar cybersecurity attacks.

Falling for cybersecurity attacks such as phishing involves a user deciding to

click on a link or reply to an email; hence, understanding technology-based decision-making processes should help explain why individuals fall victim to phishing

scams and similar cybersecurity attacks. Psychology researchers have studied how

individual differences affect decision-making, and specifically how a particular

behavior is correlated with individuals’ attitudes towards risk (Appelt et al., 2011). If

some individual factors are also predictive of user security behavior, then those

factors can be emphasized to customize security training and to improve outcomes.

However, studying and analyzing human behavior that poses a threat to the

organization’s cybersecurity in real-world situations is challenging, since most

organizations do not make data about their cybersecurity attacks and compromises publicly available. This study represents a unique opportunity to conduct research into

the population of a large public university in the mid-Atlantic region of the United

States that has been a repeated object of phishing attacks, and understand the various

factors that could impact decision-making and user security behavior.

The overarching research question that drives this study is, “What are the

factors that influence users’ online security behavior?” The user security behaviors

related to online security such as securing devices, generating good passwords and

updating them, being proactively aware of cybersecurity threats and keeping software

up-to-date are examined in this thesis. Relationships between the individual

differences in users (risk-taking preferences, decision-making styles, personality

traits, and demographics) and these online security behaviors are explored. Users’

falling for phishing is one of the top concerns for the university studied, and hence a

group of identified phishing victims are studied to gain insights into the factors that

may have influenced their victimization.

This study moves beyond existing literature on user online security behavior

and individual differences by including personality traits and university-level

demographic factors that have not been previously investigated. While we studied

online security behaviors applicable to general users’ online behaviors (which

includes personal devices too), such behavior relates to organizational cybersecurity

because of the connectivity of devices in today’s world and the freedom of connecting

personal devices to an organization’s network. For example, practices like BYOD

(Bring Your Own Device) at work enable employees to use their personal devices in

the organization. With such interconnectivity of devices, users’ online security behaviors will impact organizational cybersecurity.

This study, based on the findings

from the relationships between individual differences and online security behaviors,

and insights from interviews with identified phishing victims, makes

recommendations that can be adopted in similar organizations to create better security

messaging strategies to achieve higher end-user organizational cybersecurity

compliance.

2. Literature Review

This section begins with explaining the online user security behaviors that are

examined in this study: securing devices, generating good passwords and updating

them, being proactively aware of cybersecurity threats, and keeping software up-to-date.

It further describes the individual differences in risk-taking preferences, decision-making styles, personality traits, and demographics. Since the exploration of how

these individual differences in terms of psychometrics correlate with security attitudes

and behaviors has only very recently begun (Egelman et al., 2015), this thesis draws

heavily on the phishing literature as it is the best developed research stream on

behavioral decision-making and cybersecurity addressing the human element.

Therefore, inferences are drawn from the phishing literature on the personality traits,

decision-making styles, risk-taking preferences and demographics to build the

research model linking individual differences to online security behaviors.

2.1. User Security Behavior

There are three broad categories of user behaviors that are related to security

behavior: Risk-averse behavior, naive or accidental behavior, and risk-inclined

behavior (Stanton et al., 2005). For example, leaving a computer unattended or

accessing dubious websites can be categorized as naive behavior, while always

logging off the computer when unattended or changing passwords regularly can be

categorized as risk-averse behavior (Pattinson and Anderson, 2007). Risk-inclined or

deliberate behavior would include behaviors such as hacking into other people’s

accounts or writing and sending malicious code (Pattinson and Anderson, 2007).

The subset of user security behaviors considered in this study – securing

devices, generating good passwords and updating them, being proactively aware of

cybersecurity threats and keeping software up-to-date – fall under the categories of

risk-averse and naive behavior.

Vendors include features in many of their devices that allow them to be

“locked,” making them unusable without a PIN or password. Often these features must

be enabled by the user. Enabling these features increases the users’ online

cybersecurity. Device Securement corresponds to such behaviors as locking one’s

computer and mobile device screens or using a PIN or password to lock one’s devices

(Egelman et al., 2015).

Online account vendors emphasize the importance of generating strong

passwords and updating passwords regularly to ensure security of the accounts. Most

vendors encourage creation of strong passwords by mandating the usage of at least

one special character, or by forcing alpha-numeric usage in the passwords. Password

Generation in this study refers to the practices of choosing strong passwords, not

reusing passwords between different accounts, and changing passwords (Egelman et

al., 2015).

With the exponential growth of cyber threats, creating and promoting

awareness of these threats is a key agenda for organizations world-wide (PWC, 2015).

For example, in phishing attacks, the victimization involves a user’s decision to click

on a spurious link and falling victim to the attack. Proactive Awareness indicates the

users paying attention to contextual clues such as the URL bar or other browser indicators in websites or email messages, exhibiting caution when submitting

information to websites and being proactive in reporting security incidents (Egelman

et al., 2015).

Software vendors often provide customers with security patches and updates

to make their systems less vulnerable to cyber attacks. In most of these

updates, a user must make the decision of choosing to update when prompted.

Applying these patches and updates enables higher online cybersecurity. Updating

measures the extent to which someone consistently applies security patches or

otherwise keeps their software up-to-date (Egelman et al., 2015).

Examining and understanding the factors that influence these online security

behaviors of device securement, password generation, proactive awareness and

updating will enable identification of organizational IT users who may be creating

vulnerabilities that can be exploited. As shown in Figure 1, there are many factors that

influence user security behavior. Since the aim of this thesis is to understand end-user cybersecurity behavior and not overall organizational security, the focus is on

the users’ decision-making skills and not on the other factors like policies, values and

standards.

Figure 1: The factors that influence user security behavior (taken from Leach, 2003)

2.2. Decision-Making

Decision-making and user behavior that relate to general cybersecurity have been

most extensively studied in connection with decision strategies and

perceived/observed susceptibility to phishing (Ng et al., 2009; Leach, 2003). Thus, we

draw on this literature to guide hypothesis development. Understanding the individual

differences in users that affect their decision to perform a security behavior will

enable customization of security training to improve outcomes (Blythe et al., 2011).

The Decision-making Individual Differences Inventory (DIDI) lists an extensive set

of individual differences measures of risk attitudes and behavior, decision styles,

personality traits, etc. (Appelt et al., 2011). Three sets of individual differences or

psychometrics from DIDI – risk-taking preferences, decision-making styles and

personality traits – are studied extensively in relation to phishing. The following

sections explain these…
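Before leaving this excerpt: the password-composition rules described in Section 2.1 (vendors mandating at least one special character or an alphanumeric mix) can be sketched as a small validator. The policy below is illustrative only, not the policy of any vendor or of the university studied:

```python
import string

def meets_policy(password: str, min_length: int = 8) -> bool:
    """Illustrative composition policy: minimum length plus at least
    one letter, one digit, and one special character."""
    has_letter = any(c.isalpha() for c in password)
    has_digit = any(c.isdigit() for c in password)
    has_special = any(c in string.punctuation for c in password)
    return len(password) >= min_length and has_letter and has_digit and has_special

print(meets_policy("sunshine"))   # False: no digit or special character
print(meets_policy("Sunsh1ne!"))  # True: satisfies all four checks
```

Real password-strength meters weigh entropy and known-password lists as well; a composition check like this is only the minimal, rule-based form the text describes.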


Changing users’ security behaviour towards security

questions: A game based learning approach

Nicholas Micallef
Australian Centre for Cyber Security
School of Engineering and Information Technology
University of New South Wales
Canberra, Australia
n.micallef@adfa.edu.au

Nalin Asanka Gamagedara Arachchilage
Australian Centre for Cyber Security
School of Engineering and Information Technology
University of New South Wales
Canberra, Australia
nalin.asanka@adfa.edu.au

Abstract— Fallback authentication is used to retrieve forgotten

passwords. Security questions are one of the main techniques

used to conduct fallback authentication. In this paper, we

propose a serious game design that uses system-generated

security questions with the aim of improving the usability of

fallback authentication. For this purpose, we adopted the

popular picture-based “4 Pics 1 word” mobile game. This game

was selected because of its use of pictures and cues, which

previous psychology research found to be crucial to aid

memorability. This game asks users to pick the word that relates

to the given pictures. We then customized this game by adding

features which help maximize the following memory retrieval

skills: (a) verbal cues – by providing hints with verbal

descriptions; (b) spatial cues – by maintaining the same order of

pictures; (c) graphical cues – by showing 4 images for each

challenge; (d) interactivity/engaging nature of the game.

ease of conducting observational and guessing attacks has

increased the vulnerabilities of fallback authentication

mechanisms [4] towards all these cyber-threats, which are

leading to severe consequences, such as monetary loss,

embarrassment and inconvenience [5].

Keywords – Cyber Security, Fallback Authentication; Security Questions, Serious Games, Memorability.

Thus, to address this problem with memorability of system-generated data, in this paper we present a game design that

focuses on enhancing users’ memorability of answers to

security questions. This paper investigates the elements

(obtained from the literature [7] [8] [9] [10]) that should be

addressed in the game design to create and consequently

nurture the bond between users and their avatar profiles

(system-generated data). For the purpose of our research, we

adopted the popular picture-based “4 Pics 1 Word” 1 mobile

game. This game asks users to pick the word that relates to the

given pictures (e.g., for the pictures in Figure 2a the relating

word would be “Germany”). This game was selected because

of its use of pictures and cues, which previous psychology

research has found to be important to help with memorability

[7] [11].

I. INTRODUCTION

Republican vice presidential candidate Sarah Palin’s

Yahoo! email account was “hijacked" in the run-up to the 2008

US election. The “hacker" simply used the password reset

prompt and answered her security questions [1]. As reported

[1], the Palin hack didn’t require much technical skill. Instead,

the hacker merely used social engineering techniques to reset

Palin’s password using her birthdate, ZIP code and information

about where she met her spouse. The answers to these

questions were easily accessible with a quick Google search.

Also, as more of our personal information is available online, it

is becoming easier for attackers to retrieve this information,

through observational attacks, from social networking

websites, such as Facebook [2], Twitter or even more

professional websites like LinkedIn [3]. Besides observational

attacks, security questions are also vulnerable to guessing

attacks, in which, attackers try to access accounts by providing

low entropy (i.e., level of complexity) answers (e.g., favorite

color: blue). These attacks are part of a series of Cyber-threats

which usually include computer viruses and other types of

malicious software (malware), unsolicited e-mail (spam),

eavesdropping software (spyware), orchestrated campaigns

aiming to make computer resources unavailable to the intended

users (distributed denial-of-service (DDoS) attacks), social

engineering, and online identity theft (phishing). Hence, the A possible way to reduce the vulnerability of security

questions towards these kind of attacks is by encouraging users

to use system-generated answers [5]. One particular technique

uses an Avatar to represent system-generated data of a

fictitious person (see Figure 1), and then the Avatar’s system-generated data is used to answer security questions [5].

However, the main barrier towards widespread adoption of

these techniques is memorability [6], since users struggle to

remember the details of system-generated information to

answer their security questions.

For the purpose of our research we adopted the game, so

that at certain intervals, it asks users to solve avatar-based

challenges. Since previous research on memorability found that

recognition is a simpler memory task than recall [12], besides

recall-based challenges (see Figure 3a), in our game, we also

provide recognition-based challenges (see Figure 3b). Hence,

the proposed game design focuses on encoding the system-generated data into users’ long-term memory [11] and on aiding memorability by using the following memory retrieval skills [13]: (a) graphical cues – by using images in each challenge; (b) verbal cues – by using verbal descriptions as hints; (c) spatial cues – by keeping the same order of pictures; and (d) interactivity – the interactive/engaging nature of the game through the use of persuasive technology principles [9].

1 https://play.google.com/store/apps/details?id=de.lotum.whatsinthefoto.us&hl=en
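As an illustration, a single game challenge combining these retrieval cues might be represented as follows. The field names and file names are our own sketch, not the paper's implementation; the "Germany" example and the 12-letter pool are taken from the paper's own description:

```python
# Illustrative challenge record: graphical cues (four images),
# a spatial cue (fixed image order), a verbal cue (hint text),
# and interactivity (points at stake).
challenge = {
    "images": ["flag.png", "beer.png", "castle.png", "car.png"],
    "image_order_fixed": True,             # spatial cue: same order every time
    "verbal_hint": "A country in central Europe",  # verbal cue (hypothetical)
    "answer": "Germany",
    "letter_pool_size": 12,                # letters offered to the player
    "reward_points": 10,                   # standard (non-avatar) challenge
}

def check_guess(challenge: dict, guess: str) -> int:
    """Return points earned (positive) or deducted (negative)."""
    if guess.strip().lower() == challenge["answer"].lower():
        return challenge["reward_points"]
    return -challenge["reward_points"]

print(check_guess(challenge, "germany"))  # 10
print(check_guess(challenge, "france"))   # -10
```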

In the following sections, we describe the fallback

authentication mechanisms that are currently being used. We

then identify the strengths and weaknesses of research on

security questions to show why our research is important and

how it is considerably different from previous research that has

been conducted in this field. Afterwards, we describe the main

contribution of this paper, which is a unique game design that

uses gamification and memorability concepts to improve the

memorability of fallback authentication. Finally, we conclude

this paper by presenting the prototype that we will use to

evaluate the proposed game design in a lab study.

II. BACKGROUND

As computer users have to deal with an increasing number

of online accounts [14] [15], they are finding it more difficult to

remember all passwords for their different accounts. For

example, if we look just at social networking websites, plenty

of users have different accounts for Facebook, Twitter,

Instagram, SnapChat and LinkedIn. Since password managers

have not been widely adopted [16], resetting of passwords is

becoming a more frequent task [14] [15]. To address this

problem various forms of fallback authentication mechanisms

have been evaluated with the most popular being security

questions [17] (focus of this research) and email-based

password reset. Although email-based (or in some cases even

SMS-based) password recovery has been widely adopted by

major organizations (e.g., Google) they still have the limitation

of being vulnerable to ‘man in the middle’ attacks, since these

emails are not encrypted [18]. Other fallback authentication

mechanisms (e.g., social authentication [19]) have also been

evaluated though they have not been widely adopted [20], since

they are vulnerable to impersonation both by insiders and by

face-recognition tools [21].

Figure 1. System-generated Avatar profile as defined by Micallef and Just 2011 [5]

Security questions are the most widely adopted form of

fallback authentication [20] [15] since they are used by a

variety of popular organizations (e.g., Banks, E-commerce

websites, Social networks). Security questions are set up at

account creation. Then when they want to reset their password,

users will have to recall the answers that they provided when

setting up the account. Several studies have found that security

questions have the following major limitations: (1) can be

guessed by choosing the most popular answers [3]; (2) have

memorability problems since they are not frequently used [6],

which decreases their level of usability [22]; (3) are easily

guessed by friends, family members and acquaintances [23]

[24]; (4) can be guessed by observational attacks, with a quick

Google search or by searching victims’ social networking

websites [2]. Recent studies, conducted using security

questions data collected by Google [22], found that security

questions are neither usable (low memorability) nor secure

enough to be used as the main account recovery mechanism.

This means that new techniques need to be investigated to

provide a more secure and memorable form of fallback

authentication.
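The popularity-based guessing attack described above (limitation (1): trying the most popular answers) can be made concrete with a minimal sketch. The answer distribution below is invented for illustration; real distributions, such as those measured in the Google study cited, are far larger but similarly skewed:

```python
from collections import Counter

def guessing_success(answers, k):
    """Fraction of accounts an attacker compromises by trying
    the k most popular answers to a security question."""
    counts = Counter(answers)
    top_k = counts.most_common(k)
    return sum(c for _, c in top_k) / len(answers)

# Hypothetical answer distribution for "favorite color":
# heavily skewed toward a few low-entropy answers, as the text describes.
sample = ["blue"] * 30 + ["red"] * 20 + ["green"] * 15 + \
         ["black"] * 10 + [f"color{i}" for i in range(25)]

print(guessing_success(sample, 1))  # 0.3  (trying only "blue")
print(guessing_success(sample, 3))  # 0.65 (trying the top three answers)
```

The skew is the whole problem: with only three guesses per account, the hypothetical attacker already breaks well over half of them.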

In recent years, mobile devices have become one of the main

mediums to access the web and people started storing (and

accessing) more sensitive information on these devices [25].

Hence, the focus of authentication research has shifted to

primarily investigate new techniques (e.g., data driven

authentication using sensors [26]) to conduct authentication on

mobile devices [27] [28]. Most of the research in this area tried

to leverage the use of the variety of inbuilt sensors (e.g.,

accelerometer, magnetometer) that are available on today’s

mobile devices, with the main goal of striking a balance

between usability and security when conducting authentication

[29] [30]. However, sensors have also been used in fallback

authentication mechanisms on smartphones [31] as a technique

that extracts autobiographical information [32] about the users’

smartphone behavior during the last couple of days. This

information is then used to answer security questions about

recent smartphone use [33]. Although these innovative security

questions techniques have managed to achieve memorability

rates of about 95% using a diverse set of questions [34] [35],

these techniques have mostly been evaluated with a younger

user-base (mean age of 26), those users that use smartphones

the most [36]. Hence, we argue that other techniques need to be

investigated to cater for those users who do not use

smartphones or use them but not very frequently (e.g. age 50+).

Besides the previously described work on autobiographical

security questions, recent research has also investigated: (1)

life-experience passwords – which consist of several facts

about a user-chosen past experience, such as a trip, graduation,

or wedding, etc. [37]; (2) security meters – to encourage users

to improve the strength of their security answers [38] and (3)

avatar profiles – to represent system-generated data of a

fictitious person (see Figure 1), and then the Avatar’s

information is used to answer security questions [5]. Although life-experience passwords [37] were evaluated to be stronger than passwords and less guessable than security questions, their memorability after 6 months was still only about 50%. The work on security meters for security questions [38] seems to be quite promising; however, it is still at an embryonic stage and requires further research to evaluate its feasibility.

Using system-generated data (see Figure 1), in the form of

an avatar profile, to answer security questions [5] has also not

been extensively investigated. However, in our research we

attempt to investigate this work further because compared to

other research on security questions it seems to be the one that

has the potential to achieve the optimal balance in terms of

security and memorability due to the following reasons: (1) it

could be tailored for everyone (and not only for those users

with medium/high smartphone usage); (2) guessing attacks

could be minimized because the entropy and variety of the

answers could be defined/controlled by the system that

generates them; (3) risks of having observational attacks would

be minimal since the system-generated avatar information

would not be publicly available; and (4) memorability could be

achieved by using a gamified approach to create and nurture a

bond between users and their avatar profiles (in the form of

system-generated data as in Figure 1).
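Reason (2) above, controlling answer entropy, can be made concrete: if the system draws each avatar attribute uniformly from a pool it defines, the guessing entropy of an answer is simply log2 of the pool size. The pool sizes below are hypothetical, chosen only to illustrate the arithmetic:

```python
import math

# Hypothetical attribute pools for a system-generated avatar profile;
# in a real system the designer would choose and control these pools.
pools = {
    "first_name": 512,   # possible avatar names
    "home_town": 1024,   # possible home towns
    "pet_name": 256,     # possible pet names
}

def answer_entropy_bits(pool_size: int) -> float:
    """Entropy of one uniformly drawn answer, in bits."""
    return math.log2(pool_size)

total_bits = sum(answer_entropy_bits(n) for n in pools.values())
print(total_bits)  # 9 + 10 + 8 = 27.0 bits across the three attributes
```

Contrast this with a user-chosen "favorite color," where a handful of popular answers dominate and the effective entropy is a few bits at most; system-generated answers let the designer set this floor directly.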

Bonneau and Schechter found that most users can

memorize passwords when using tools that support learning

over time [39]. However, to the best of our knowledge, no one has

attempted to use serious games to improve the users’

memorability of systems-generated answers for security

questions. Thus, in our research, we attempt to use a gamified

approach to improve users’ memorability during fallback

authentication because previous work in the security field [40]

has successfully used this approach to educate users about the

susceptibility to phishing attacks [41] with the aim of teaching

users to be less prone to these types of security vulnerabilities

[42]. Hence, this paper contributes to the field of fallback

authentication by proposing a game design which uses long-term memory and memory retrieval skills [13] to improve the

memorability of security answers based on a system-generated

avatar profile.

encoding associations (bond) with the avatar profile by using

the picture-based nature of this game and by adding verbal

cues. Then in section IIIB we describe how we strengthen these

encodings by having users constantly rehearse associations

(nurture the bond) through persuasive technology principles

[9].

A. Game Features

In most instances, the game functions similarly to the “4

Pics 1 Word” mobile game, meaning that the game asks

players to pick the word that relates to the given pictures (e.g., for

the pictures in Figure 2a the relating word would be

“Germany”). However, at certain intervals, the game asks

players to solve avatar-based challenges. The optimal number

of times that players will be given avatar-based challenges

during a day to learn the system-generated avatar information

will be investigated in a field study. The game provides players

with a pool of 12 letters to assist them with solving the

challenge. For each given answer, players are either rewarded

or deducted points based on whether they provided the correct

or wrong answer (10 points when answering standard

challenges, 15 points when answering avatar-based recognition

challenges, 20 points when answering avatar-based recall

challenges). Points can be used to obtain hints to help in

solving more difficult challenges (deduction of 30/50 points).

III. GAME DESIGN

The main challenge in designing usable security questions

mechanisms is to create associations with answers that are

strong and to maintain them over time. In our research we use

previous findings on the understanding of long-term memory to

design a game which has the aim of improving the

memorability of system-generated answers for security

questions. Atkinson and Shiffrin [11] proposed a cognitive

memory model, in which, new information is transferred to

short-term memory through the sensory organs. The short-term

memory holds this new information as mental representations

of selected parts of the information. This information is only

passed from short-term memory to long-term memory when it

can be encoded through cue-association [11] (e.g., when we see

a cat it reminds us of our first cat). This encoding through cue-association helps people to remember and retrieve the stored

information over an extended period of time. These encodings

are strengthened through constant rehearsals. Also, psychology

research has found that humans are better at remembering

images than textual information (known as the picture

superiority effect) [7]. In section IIIA, we describe how we use

these psychology concepts to adopt the popular “4 Pics 1

Word” mobile game for the purpose of our research. We create

Figure 2. Examples of standard game challenges.

Researchers in psychology have defined two main theories

to explain how humans handle recall and recognition:

Generate-recognize theory [43] and Strength theory [12].

According to the generate-recognize theory [43] recall is a two

phase process: Phase 1 – A list of possible words is formed by

looking into long-term memory; Phase 2 – The list of possible

words is evaluated to determine if the word that is being looked

for is within the list. According to this theory recognition does

not use the first phase, hence it’s easier and faster to perform.

According to strength theory [12] recall and recognition require

the same memory tasks, however recognition is easier since it

requires a lower level of strength. When it comes to avatar-based challenges, in our game we decided to use both recall and

recognition challenges would have lowered the security level

of the game, since the answer space would have been very

small. Hence, to try and strike a balance between security and

memorability, we designed the avatar challenges part of the

game so that it starts by showing mostly recognition-based

challenges (see Figure 3b). Then as players get more

accustomed to the avatar profile and they learn the system-generated data (strengthening of the bond), the avatar-based

challenges would become mainly recall-based (see Figure 3a).

Figure 3. Examples of recall and recognition-based avatar challenges.

Psychology research [43] [44] has shown that it is difficult

to remember information spontaneously without having any

kind of memory cues. Hence, we added a feature that shows

verbal cues about each picture (see Figure 2b). This feature can

be enabled by using the points (30/50 points) that are gathered

when solving other game challenges as the player goes through

the game. We decided to add this feature, especially for the

avatar-based challenges, so that players can focus their

attention on associating the words with the corresponding cues

(pictures). We hypothesize that this should help to process and

encode the information in memory and store it in the long-term

memory [13].

players recognize the answer by associating it with the other

images that are presented with it. To improve the security

element of the game, especially when solving avatar-based

challenges, our game does not show the length of the word that

needs to be guessed. This feature makes the game more

difficult, but we argue that it increases the level of security.
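The point scheme described in this subsection (10/15/20-point rewards per challenge type, with the same amount deducted for wrong answers, and hints costing 30 or 50 points) can be sketched as follows. The point values come from the paper; the function names are our own illustration, not the authors' implementation:

```python
# Point values taken from the paper's Game Features description.
REWARDS = {"standard": 10, "avatar_recognition": 15, "avatar_recall": 20}
HINT_COSTS = {"early": 30, "late": 50}  # hints get more expensive later

def apply_answer(score: int, challenge_type: str, correct: bool) -> int:
    """Reward a correct answer; deduct the same amount for a wrong one."""
    points = REWARDS[challenge_type]
    return score + points if correct else score - points

def buy_hint(score: int, stage: str) -> int:
    """Spend accumulated points on a hint."""
    cost = HINT_COSTS[stage]
    if score < cost:
        raise ValueError("not enough points for a hint")
    return score - cost

score = 0
score = apply_answer(score, "avatar_recall", correct=True)  # +20 -> 20
score = apply_answer(score, "standard", correct=True)       # +10 -> 30
score = buy_hint(score, "early")                            # -30 -> 0
print(score)  # 0
```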

B. Engagement

To nurture the bond between players and their avatars, we

will use constant rehearsals to strengthen the encodings of

associations with the system-generated data, in the players’

long-term memory. We plan to achieve this by using the

following persuasive technology principles proposed by Fogg

[9] and also used in [45]:

Tunnelling: Tunnelling is the process of providing a game

experience which contains opportunity for persuasion [9].

Players are more likely to engage in a tunnelling experience

when they can see tangible results [45]. For this reason, at the

beginning of the game, the avatar-based challenges are mostly

recognition-based rather than recall-based. We hypothesize that

in this way it is less likely that players will stop playing the

game due to being exposed to difficult challenges at the

beginning. Also, at this stage of the game obtaining hints

requires a low amount of points (30 points). Additional levels

of difficulty (recall-based challenges) become available only as

players either demonstrate sufficient skill, or play the game for

several days or weeks. As the player goes through the game the

cost (in points) of buying hints or obtaining verbal cues will

increase as well (50 points).

Conditioning: According to persuasive technology

principles [9] players can be conditioned to play a game if they

are offered rewards to compensate their progress. In our game

we reward players with points when they solve challenges

correctly (more points are given when avatar-based challenges

are solved, recall-based challenges provide more points than

recognition-based challenges). The more points players collect

the more hints they can obtain when they are struggling to

solve other game challenges. We also reward players with the

following badges (see Figure 4) each time that they solve

avatar-based challenges: (1) a “smiley” badge when they solve

1 avatar challenge (see Figure 4a); (2) a “cake” badge when

they solve half of the daily avatar challenges (see Figure 4b);

(3) a “trophy” badge when they solve all daily avatar

challenges (see Figure 4c). Special sounds and visualizations

are displayed when these badges or an important milestone is

achieved (see Figure 4d).
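The badge rules just described (a "smiley" for the first solved avatar challenge, a "cake" at half the daily avatar challenges, a "trophy" when all are solved) can be sketched deterministically; the function name and signature are our own:

```python
def badges_earned(solved_today: int, daily_total: int) -> list[str]:
    """Badges from the paper's reward scheme: 'smiley' after one avatar
    challenge, 'cake' at half the daily challenges, 'trophy' at all."""
    earned = []
    if solved_today >= 1:
        earned.append("smiley")
    if solved_today * 2 >= daily_total:   # at least half solved
        earned.append("cake")
    if solved_today >= daily_total:       # all solved
        earned.append("trophy")
    return earned

print(badges_earned(1, 6))  # ['smiley']
print(badges_earned(3, 6))  # ['smiley', 'cake']
print(badges_earned(6, 6))  # ['smiley', 'cake', 'trophy']
```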

Suggestion: Persuasive technology principles [9] suggest

that messages and notifications should be well timed in order to

be more effective. For this reason in our game we send

notifications to remind players to play the game every 24

hours, if they did not play the game during that time frame.

Also, every 24 hours we provide hints when players are stuck

with a game challenge.

Figure 4. Examples of rewards and game visualizations.

We decided to have a fixed set of images and always show

the same images in the same order because this helps

enhancing semantic priming [13]. Meaning that it will help Self-monitoring: Persuasive technology principles [9] state

that constantly showing progress can motivate players to

improve their performance. For this reason, in our game we

show the score and the progress in solving avatar-based

challenges each time that players play the game. We also show graphs on how many avatar-based challenges were solved

correctly during a day/week/month and how many challenges

still need to be solved to progress to the next stage. We

hypothesize that these tools will help players identify areas for

improvement and provide motivation to continue playing the

game with the aim of improving performance.
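The daily progress counts behind these graphs could be aggregated along the following lines; the log layout and function names are our assumptions, since the paper only states which aggregates are shown.

```python
# Sketch of the self-monitoring aggregates described above: counting how
# many avatar-based challenges were solved correctly per day, and how many
# remain before the next stage unlocks. The (day, correct) log layout is
# our assumption, not the authors' data model.
from collections import Counter
from datetime import date

def solved_per_day(log: list[tuple[date, bool]]) -> Counter:
    """Count correctly solved challenges per calendar day."""
    return Counter(day for day, correct in log if correct)

def remaining_to_next_stage(solved_total: int, stage_target: int) -> int:
    """How many challenges are still needed to progress to the next stage."""
    return max(stage_target - solved_total, 0)
```

Weekly and monthly graphs would simply re-bucket the same log by ISO week or by month.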

Surveillance and Social Cues: According to persuasive

technology [9], players are more encouraged to perform certain

actions if others are aware of these actions and by leveraging

social cues. In our game, we implement a social element of

surveillance by: (1) congratulating players when they return to

play the game every day; (2) applauding players when they reach an important game milestone; (3) encouraging players even when they give incorrect answers; (4) expressing disappointment when players don’t play the game regularly.

Humour, Fun and Challenges: Affect is also an important

factor to enhance players’ motivation [45]. To make the game

more fun we included emoticons when sending reminders or

when communicating with players. This is also the reason why

we selected humorous badges (smiley, cake, trophy) to reward

players when they reach avatar-related milestones (see Figure

4). Our motivation is to keep players interested and engaged in

playing the game.

IV. PROTOTYPE GAME LOGIC

In our lab study we plan to evaluate a game prototype by

using the following logic. As shown in Figure 5, the game

starts by picking a random standard challenge from a pool of 7

standard challenges (all players will experience the same

standard challenges but in a random order). After completing a

standard challenge, the game player is deducted/awarded

points. Afterwards, the challenge is removed from the pool of

available challenges. At this stage the player is presented with a

randomly selected avatar-based recognition challenge (based

on the avatar profile that they selected prior to playing the

game). If the player picks the correct answer, a badge is

awarded based on how many avatar-based challenges they

solved. The player will continue to be presented with alternate

standard and avatar-based recognition challenges until they

complete the 3 avatar-based recognition challenges. After that,

the player is prompted with alternate standard and avatar-based

recall challenges until all 3 recall avatar-based challenges are

completed. This is where the game ends. In total, each player

will complete 7 standard challenges, 3 recognition and 3 recall

avatar-based challenges.
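The challenge sequencing described in this section can be sketched as follows. The `Challenge` type and function names are our own; only the counts (7 standard, 3 recognition, 3 recall) and the alternating order come from the text.

```python
# Sketch of the prototype game logic: 7 standard challenges drawn in random
# order, interleaved first with 3 recognition and then with 3 recall
# avatar-based challenges. Names and types are our illustration.
import random
from dataclasses import dataclass

@dataclass
class Challenge:
    kind: str  # "standard", "recognition", or "recall"
    ident: int

def build_session(seed: int = 0) -> list[Challenge]:
    rng = random.Random(seed)
    standard = [Challenge("standard", i) for i in range(7)]
    rng.shuffle(standard)  # same pool for every player, random order
    recognition = [Challenge("recognition", i) for i in range(3)]
    recall = [Challenge("recall", i) for i in range(3)]

    session = []
    avatar = recognition + recall  # recognition challenges first, then recall
    # Alternate standard and avatar-based challenges until both pools empty.
    while standard or avatar:
        if standard:
            session.append(standard.pop())
        if avatar:
            session.append(avatar.pop(0))
    return session
```

This yields 13 challenges per player, matching the totals stated above; scoring and badge awards would hook into each completed challenge.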

V. CONCLUSIONS AND FUTURE WORK

The proposed game design outlined in this paper teaches

and nudges users to provide stronger answers to security

questions to protect themselves against observational and

guessing attacks. Since this technique uses system-generated

data (see Figure 1), it is quite unlikely that attackers would be

able to retrieve the avatar-based answers from Google

searches/social networks or through guessing attacks. We

believe that helping users to memorize the avatar’s system-generated data through an engaging/interactive gamified

approach can help users create and nurture a bond with their

avatar. This will be achieved by encoding information in long-term memory through constant rehearsals with the aim of

improving memorability of fallback authentication (i.e.,

security questions). In our future work, we will conduct studies

to involve users in this game design (by using the prototype

described in section IV and logic shown in Figure 5) to further

optimize the functionalities of the game and determine any

security vulnerabilities that need to be addressed. Afterwards,

we will conduct a field study to…


Emirical study on ICT system.pdf

MIPRO 2015, 25-29 May 2015, Opatija, Croatia

Empirical study on ICT system’s users’ risky

behavior and security awareness

K. Solic*, T. Velki** and T. Galba***

*J.J. Strossmayer University, Faculty of Medicine, Osijek, Croatia

**J.J. Strossmayer University, Faculty of Education, Osijek, Croatia
***J.J. Strossmayer University, Faculty of Electrical Engineering, Osijek, Croatia
kresimir.solic@mefos.hr, tena.velki@gmail.com, tomislav.galba@etfos.hr

Abstract – In this study authors gathered information on

ICT users from different areas in Croatia with different

knowledge, experience, working place, age and gender

background in order to examine today’s situation in the

Republic of Croatia (n=701) regarding ICT users’

potentially risky behavior and security awareness. To gather

all desired data the validated Users’ Information Security

Awareness Questionnaire (UISAQ) was used.

The analysis outcome represents the results of ICT users in Croatia

regarding 6 subareas (mean of items): Usual risky behavior

(x1=4.52), Personal computer maintenance (x2=3.18),

Borrowing access data (x3=4.74), Criticism on security in

communications (x4=3.48), Fear of losing data (x5=2.06),

Rating importance of backup (x6=4.18). In this work

comparison between users regarding demographic variables

(age, gender, professional qualification, occupation,

managing job position and institution category) is given.

Maybe the most interesting information is the percentage of

questioned users that have revealed their password for

professional e-mail system (28.8%). This information should

alert security experts and security managers in enterprises,

government institutions and also schools and faculties.

Results of this study should be used to develop solutions and

induce actions aiming to increase awareness among Internet

users on information security and privacy issues.

I. INTRODUCTION

The importance of ICT system’s users’ knowledge and awareness about information security issues should be acknowledged when dealing with user’s privacy and information security in general [1-3]. Users with potentially risky behavior can significantly affect the overall security level of different information and communication systems [4-6].

Generally, the main goal of empirical studies is to produce new knowledge based on analysis of the gathered data. In that manner, the aim of this work was to produce new conclusions about ICT system users’ knowledge, behavior and awareness regarding information security issues. For the purpose of collecting data the authors used the previously validated Users’ Information Security Awareness Questionnaire (UISAQ) [7]. A total of 701 participants included in this study were ICT users from different areas in Croatia with different knowledge, experience, working place, age and gender.

Other empirical studies with a similar aim examined only certain segments of ICT users’ awareness or behavior, mostly focused on password usage and password quality [8-13]. However, by using the UISAQ questionnaire when examining ICT users this empirical research covers a wide range of awareness, knowledge and user behavior. Additional quality of the empirical research was achieved by using the UISAQ questionnaire as a statistically validated measuring instrument.

The UISAQ questionnaire has two main scales and six subscales, each with five or six items. The associated abbreviations are used in the further text and tables:

• Potentially Risky Behavior (PRB; k=17)
  o Usual Behavior (UB; k=6)
  o Personal Computer Maintenance (PCM; k=6)
  o Borrowing Accessing Data (BAD; k=5)
• Knowledge and Awareness (KA; k=16)
  o Security in Communications (SC; k=5)
  o Secured Data (SD; k=5)
  o Backup Quality (BQ; k=6)

These subscales describe user behavior, knowledge and awareness. Participants evaluate their agreement with each statement on a 5-point Likert-type scale where five means excellent from the aspect of information security. At the end of the UISAQ questionnaire there were two additional questions about behavioral security of users and one part with demographic data.

For statistical analysis in this work the statistical software tool MedCalc 14.12.0 was used. Statistical significance when comparing differences among groups was defined as p<0.05, using the nonparametric Mann-Whitney U Test and Kruskal-Wallis Test with Bonferroni correction when needed.

II. PARTICIPANTS

In this research a paper version of the UISAQ questionnaire was used for data collection. The sample was defined to be as similar as possible to the general adult ICT user in the Republic of Croatia, covering different regions (Dalmatia, Slavonia, Zagreb area), both rural and urban areas, both government institutions and business organizations, and also including students, unemployed and retired users with different background knowledge and experience regarding information security issues.

Participants (n=701) in total were 32.0 ± 11.5 years

old (arithmetic mean ± standard deviation), youngest

participant was 18 and oldest was 66 years old. Among

participants, 61.6% were female and 31.4% were working in the private sector, i.e., business organizations. Regarding professional

qualifications most of the participants were with high

education (masters) 36.8%, while there was similar

percentage of those with high school (25.2%) and

bachelor degree (24.5%). The 28.8% of all participants

revealed their password for professional e-mail systems’

access by writing it down on the questionnaire.

Analysis outcome of the whole sample represents

average results of ICT users in Croatia regarding 6

subareas (mean of items): Usual risky behavior (x1=4.52),

Personal computer maintenance (x2=3.18), Borrowing

access data (x3=4.74), Criticism on security in

communications (x4=3.48), Fear of losing data (x5=2.06),

Rating importance of backup (x6=4.18).
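The tests named in the methods section can be illustrated with scipy. The paper itself used MedCalc 14.12.0, and the scores below are synthetic, merely sized like the reported groups; only the test names and the Bonferroni threshold come from the paper.

```python
# Illustration of the nonparametric tests used in this study. The data are
# synthetic (random scores shaped like the reported group means/SDs); the
# paper's actual analysis was done in MedCalc 14.12.0.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
male = rng.normal(3.68, 0.37, 269)    # synthetic UISAQ totals, sized like Table 1
female = rng.normal(3.71, 0.32, 432)

# Two groups: Mann-Whitney U Test
u, p = stats.mannwhitneyu(male, female)

# Four age groups: Kruskal-Wallis Test, with pairwise follow-ups judged
# against a Bonferroni-corrected threshold (0.05 / 4 = 0.0125, as in the paper)
groups = [rng.normal(m, 0.35, 150) for m in (3.58, 3.73, 3.74, 3.73)]
h, p_kw = stats.kruskal(*groups)
bonferroni_alpha = 0.05 / 4
```

Pairwise group comparisons would then repeat `mannwhitneyu` for each pair and declare significance only when p < `bonferroni_alpha`.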

III. COMPARISON RESULTS

In order to compare ICT users, the authors made groups

regarding gender, age, workplace in government or private

sector, professional qualification, managing job position

and regarding revealing the password. The following results are the more interesting part of the total results, depending on the existence of a statistically significant difference.

Comparison results regarding gender show that

female ICT users got generally better results, except in

subscale “Usual Behavior” (Table 1). There is no

statistically significant gender difference regarding

password revealing (p=0.547, Chi-Square Test).
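The gender-vs-password-revealing comparison can be sketched with a chi-square test on a 2×2 contingency table. The paper reports only p=0.547, not the cell counts, so the counts below are hypothetical, chosen only to match the group sizes.

```python
# Sketch of the Chi-Square Test named above. The cell counts are
# HYPOTHETICAL (the paper does not publish them); only the row totals
# match the reported group sizes (269 male, 432 female).
from scipy.stats import chi2_contingency

#            revealed  not revealed
table = [[80, 189],    # male   (n=269)
         [122, 310]]   # female (n=432)

chi2, p, dof, expected = chi2_contingency(table)
```

A p-value above 0.05 here would, as in the paper, indicate no significant gender difference in password revealing.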

Comparison results regarding age, where four groups were defined, show that middle-aged and older ICT users got generally better results (Table 2). For analysis between each pair of groups the Mann-Whitney U Test with Bonferroni correction (p<0.0125) was used. Analysis results

have shown that youngest group of users is significantly

different of all other groups in total and most other

subscales (UISAQ, PCM, KA, SC), while a significant difference among the other three groups of users was found only regarding the subscale “Personal Computer Maintenance”.

TABLE I. GENDER DIFFERENCES IN USERS’ INFORMATION SECURITY AWARENESS

         Male (n=269)   Female (n=432)   p*
UISAQ    3.68±0.37      3.71±0.32        0.237
PRB      4.17±0.40      4.15±0.34        0.409
UB       3.30±0.96      3.14±0.87        0.018
PCM      4.45±0.48      4.58±0.38        <0.001
BAD      4.75±0.38      4.72±0.39        0.050
KA       3.18±0.54      3.27±0.46        0.013
SC       3.33±0.83      3.57±0.81        <0.001
SD       2.06±0.84      2.05±0.76        0.734
BQ       4.16±0.72      4.20±0.65        0.667
* Mann-Whitney U Test (values are x±SD)

Regarding the workplace of ICT users, working in government institutions or in the private sector, results have shown that both groups of ICT users got similar results, except in the subscale “Secured Data”, where users working in the private sector got a significantly lower result (p=0.015;

Mann-Whitney U Test with Bonferroni correction).

However, ICT users that work in private sector

significantly more often reveal their password (p<0.001,

Chi-Square Test), 48.2% of them.

Results of comparison between groups of ICT users

with different professional qualifications have shown that participants with masters got the total result and two subscales

regarding behavior (UISAQ, PRB, UB) significantly

better than all other groups (with p<0.001; Mann-Whitney

U Test with Bonferroni correction) (Table 3). Most

significant differences (with p<0.0125; Mann-Whitney U

Test) were found between users with masters and users

with high school (UISAQ, PRB, UB and SD) while users

who attended gymnasium are more skeptical in securing

data than users with high school (SD).

Results of comparison between groups of ICT users

regarding managing job position have shown significant

difference between top management and the rest of

employees and also significant difference between

employed and unemployed users (Table 4). Statistical

analysis between each group (p<0.0125; Mann-Whitney U

Test with Bonferroni correction) has shown that unemployed users are significantly different from all other groups in total and in several other subscales (UISAQ, PCM, KA, SC, BQ). A significant difference was also found between top and middle management regarding “Borrowing Accessing Data”, while there was no difference found between each management group and employed users.

TABLE II. AGE DIFFERENCES IN USERS’ INFORMATION SECURITY AWARENESS

         18-30 (n=166)  31-40 (n=206)  41-50 (n=190)  51-66 (n=139)  p*
UISAQ    3.58±0.36      3.73±0.31      3.74±0.35      3.73±0.32      <0.001
PRB      4.07±0.39      4.21±0.37      4.18±0.34      4.14±0.33      0.028
UB       3.25±0.78      3.40±0.90      3.16±0.93      2.90±0.95      0.105
PCM      4.30±0.50      4.48±0.42      4.61±0.33      4.76±0.27      <0.001
BAD      4.66±0.49      4.74±0.37      4.78±0.29      4.77±0.38      <0.001
KA       3.09±0.51      3.26±0.43      3.30±0.55      3.31±0.46      <0.001
SC       3.15±0.89      3.49±0.83      3.66±0.74      3.62±0.74      <0.001
SD       2.04±0.78      2.00±0.69      2.11±0.92      2.07±0.75      0.832
BQ       4.07±0.67      4.29±0.56      4.13±0.80      4.24±0.65      0.009
* Kruskal-Wallis Test (values are x±SD)

TABLE III. DIFFERENCES IN SUBSCALES OF UISAQ REGARDING PROFESSIONAL QUALIFICATION OF USERS

         High school (n=177)  Gymnasium (n=77)  Bachelor (n=172)  Masters (n=258)  p*
UISAQ    3.64±0.36            3.65±0.40         3.66±0.30         3.77±0.32        <0.001
PRB      4.13±0.38            4.07±0.40         4.08±0.35         4.24±0.34        <0.001
UB       3.05±1.02            3.00±0.90         3.03±0.80         3.46±0.83        <0.001
PCM      4.56±0.48            4.49±0.55         4.51±0.42         4.53±0.35        0.103
BAD      4.77±0.38            4.70±0.43         4.70±0.42         4.75±0.36        0.211
KA       3.15±0.49            3.24±0.60         3.24±0.46         3.30±0.48        0.090
SC       3.49±0.83            3.39±0.97         3.49±0.83         3.49±0.78        0.976
SD       1.93±0.81            2.17±0.78         1.99±0.76         2.13±0.81        0.005
BQ       4.03±0.85            4.15±0.73         4.22±0.59         4.27±0.58        0.088
* Kruskal-Wallis Test (values are x±SD)

Comparing results between ICT users that did or did

not reveal their password for accessing the professional e-mail system are shown in the last table (Table 5). ICT users that revealed their password got a significantly lower overall result and lower results for the three subscales that examine “Potentially Risky Behavior” (UB, PCM and BAD). There is also a significant difference regarding age, where younger ICT users significantly more often reveal their password. Likewise, ICT users with a lower level of education significantly more often reveal their password (p<0.001, Fisher’s Exact Test).

TABLE IV. DIFFERENCES IN SUBSCALES OF UISAQ REGARDING MANAGING JOB POSITION OF USERS

         Top management (n=24)  Middle management (n=126)  Employee (n=495)  Unemployed (n=55)  p*
UISAQ    3.85±0.35              3.72±0.32                  3.70±0.34         3.54±0.35          0.001
PRB      4.26±0.31              4.15±0.36                  4.15±0.37         4.12±0.31          0.259
UB       3.47±0.79              3.17±0.93                  3.18±0.90         3.37±0.92          0.151
PCM      4.43±0.37              4.58±0.34                  4.55±0.43         4.20±0.48          <0.001
BAD      4.89±0.16              4.71±0.36                  4.73±0.41         4.77±0.26          0.020
KA       3.44±0.56              3.28±0.44                  3.25±0.49         2.96±0.53          <0.001
SC       3.63±0.86              3.65±0.73                  3.50±0.82         2.87±0.81          <0.001
SD       2.25±0.75              2.02±0.65                  2.06±0.82         2.03±0.87          0.470
BQ       4.46±0.43              4.18±0.65                  4.20±0.69         3.98±0.71          0.012
* Kruskal-Wallis Test (values are x±SD)

TABLE V. DIFFERENCES IN SUBSCALES OF UISAQ REGARDING USERS’ PASSWORD REVEALING

         No (n=499)    Revealed (n=202)   p*
UISAQ    3.72±0.35     3.64±0.31          0.003
PRB      4.19±0.36     4.06±0.37          <0.001
UB       3.25±0.92     3.07±0.86          0.008
PCM      4.56±0.40     4.46±0.49          0.029
BAD      4.77±0.34     4.65±0.48          0.001
KA       3.25±0.51     3.21±0.46          0.530
SC       3.51±0.83     3.41±0.83          0.259
SD       2.04±0.79     2.08±0.80          0.461
BQ       4.20±0.68     4.14±0.68          0.134
Age      40.34±11.42   37.92±11.49        0.009
* Mann-Whitney U Test (values are x±SD)

IV. CONCLUSION

Some general conclusions about ICT users’ behavior and awareness emerging from the analysis results are:

• Female users are generally more careful and more skeptical compared to their male colleagues;
• Regarding age differences, middle-aged and older ICT users got better results in total and in most of the subscales;
• ICT users that work in the private sector significantly more often reveal their password;
• Comparison of users with different professional qualifications has shown that participants with masters got a significantly better overall result than other users. The most significant differences were found between users with masters and users with high school;
• Unemployed users got significantly lower results than all other groups in total and in several subscales, both regarding behavior and awareness. A significant difference between the three groups of employed users was found only regarding borrowing access data;
• Participants who did not reveal their password generally got better results than participants that did.

Regarding gender, age and different professional qualifications the results are expected. However, top management participants achieved surprisingly good results, which is very important as that kind of ICT user is most often the target of direct phishing attacks.

Maybe the most interesting information is the percentage of users that revealed their password for the professional e-mail system (28.8%), and that they work in the private sector significantly more often. This information should

alert security experts and security managers in companies,

government institutions and also schools and faculties.

There are a few possible drawbacks of this study. It was not possible for the authors to verify whether the revealed passwords were true and active, and there would likely have been a higher number of revealed passwords if some employees in some departments had not been warned in advance. Other

recommendations for future studies would be additional

questions in demographic section of UISAQ questionnaire

and bigger sample size.

Results of this study should be used to develop

solutions and induce actions aiming to increase awareness

among Internet users on information security and privacy

issues.

REFERENCES

[1] A. Tsohou, S. Kokolakis, M. Karyda and E. Kiountouzis, “Process-variance models in information security awareness research”, Information Management & Computer Security, vol. 16, pp. 271-287, July 2008.
[2] S. Williams and S. Akanmu, “Relationship Between Information Security Awareness and Information Security Threats”, IJRCM, vol. 3, pp. 115-119, August 2013.
[3] P. Tasevski, “Methodological approach to security awareness”, CyberSecurity for the Next Generation, (Politecnico di Milano, Italy), 11-12 December 2013.
[4] P. Puhakainen and M. Siponen, “Improving Employees’ Compliance Through Information Systems Security Training: An Action Research Study”, MIS Quarterly, vol. 34, pp. 757-778, December 2010.
[5] K. Solic, D. Sebo, F. Jovic and V. Ilakovac, “Possible Decrease of Spam in the Email Communication”, Proceedings IEEE MIPRO, (Opatija), pp. 170-173, May 2011.
[6] K. Beckers, L. Krautsevich and A. Yautsiukhin, “Analysis of Social Engineering Threats with Attack Graphs”, Proceedings of the 3rd International QASA Workshop (affiliated with ESORICS), (Wroclaw, Poland), September 2014.
[7] T. Velki, K. Solic and H. Ocevcic, “Development of Users’ Information Security Awareness Questionnaire (UISAQ) – Ongoing Work”, Proceedings IEEE MIPRO, (Opatija), pp. 1417-1421, May 2014.
[8] A. Keszthelyi, “About Passwords”, Acta Polytechnica Hungarica, vol. 10, pp. 99-118, September 2013.
[9] K. Solic, H. Ocevcic and D. Blazevic, “Survey on Password Quality and Confidentiality”, Automatika, vol. 56, April 2015 (accepted for publication).
[10] A.G. Voyiatzis, C.A. Fidas, D.N. Serpanos and N.M. Avouris, “An Empirical Study on the Web Password Strength in Greece”, 15th Panhellenic Conference on Informatics, (Kastoria, Greece), pp. 212-216, September-October 2011.
[11] M. Dell’Amico, P. Michiardi and Y. Roudier, “Password Strength: An Empirical Analysis”, Proceedings IEEE INFOCOM, (San Diego, CA), pp. 1-9, March 2010.
[12] W. Ma, J. Campbell, D. Tran and D. Kleeman, “Password Entropy and Password Quality”, 4th International Conference on Network and System Security, (Melbourne, VIC), pp. 583-587, September 2010.
[13] P.G. Kelley, S. Komanduri, M.L. Mazurek, R. Shay, T. Vidas, L. Bauer, N. Christin, L.F. Cranor and J. Lopez, “Guess Again (and Again and Again): Measuring Password Strength by Simulating Password-Cracking Algorithms”, IEEE Symposium on Security and Privacy, (San Francisco, CA), pp. 523-537, May 2012.


Improving user security behaviour.pdf

cose 2208.qxd 08/12/2003 15:56 Page 685

Improving user security

behaviour

Many organisations suspect that their internal

security threat is more pressing than their

external security threat. The internal threat

is predominantly the result of poor user

security behaviour. Yet, despite that, security

awareness programmes often seem more likely

to put users to sleep than to improve their

behaviour. This article discusses the

influences that affect a user’s security

behaviour and outlines how a well structured

approach focused on improving behaviour

could be an excellent way to take security

slack out of an organisation and to achieve a

high return for a modest, low-risk investment.

Dr John Leach
John Leach Information Security
Tel: +44 1264 332 477
Fax: +44 7734 311 567
Email: John.Leach@JohnLeachIS.com

Computers & Security Vol 22, No 8

A. Introduction

All modern organisations have to rely on the sensible behaviour of their staff every day and in every operational task that their staff perform. No matter how good an organisation’s security policies and standards, security documentation simply cannot spell out unambiguously how staff should act in each situation they might encounter. Organisations cannot avoid having to rely on their staff to make sensible security decisions for each task — no matter how small — that has any security or control element to it.

Whether diligently checking a transaction before it is released, being careful what they say over the telephone to an external caller, selecting a non-trivial password, or thinking twice before opening an unexpected and out-of-context email attachment, staff are continually having to make day-to-day security decisions. If just one hundredth of these decisions were made wrongly, a large organisation would be carrying a huge weight of daily security errors, causing a mammoth operational overhead.

A recent study by the ISF (‘Information Security Culture’, The Information Security Forum, November 2000) and parallel studies of safety failures in high-hazard environments (referenced in the above ISF report) suggest that as many as 80% of major security failures could be the result not of poor security solutions but of poor security behaviour by staff. Hence, a well-focused security programme targeted at improving user security behaviour could significantly reduce the size of the security-related overhead.

In this article we look at six factors that have a strong influence on people’s security behaviour. We then point to the three key factors where an organisation can take clear steps to improve its staff behaviour and, thereby, significantly reduce the internal security threat and the level of security incidents experienced.

B. The internal security threat

The internal security threat is a threat area encompassing a broad range of events, incidents and attacks, all connected by being caused not by external people who have no right to be using the corporate IT facilities but by the company’s own staff, its authorised IT users. This threat area covers user errors and omissions. It also covers user negligence and deliberate acts against the company. It encompasses behaviours such as:

a lack of security common sense1 — users doing things that all users should know better than to do, e.g. double-clicking on an odd-looking .exe file that comes in by email or sharing their password with colleagues;

users forgetting to apply security procedures, e.g. peripatetic staff failing to take back-ups of their desktop data or support staff resetting a user’s password on the strength of an incoming telephone call;

users taking inappropriate risks because they did not appreciate or believe the level of risk involved, e.g. leaving the PC unattended in an open office without logging off;

deliberate acts of negligence — users knowingly failing to follow essential security processes, e.g. emailing a highly sensitive document outside the company without any protection or support staff failing to keep infrastructure patched simply because it is ‘too difficult’;

deliberate attacks — users purposefully acting against the company’s interests, perhaps because they feel angry with their employer, e.g. disclosing a clearly restricted and highly sensitive report to the competition or disclosing significant security vulnerabilities to an outside bulletin board.

1 The Oxford English Dictionary defines common sense as ‘sound practical sense especially in everyday matters’. By extension, security common sense is sound practical sense in everyday security matters.

0167-4048/03 ©2003 Elsevier Ltd. All rights reserved.

Poor or unacceptable user behaviour is a

significant, perhaps even major, determinant of

the level of security incidents suffered by a

company. User behaviour can be improved

through a variety of interlocking techniques

which, together, work to create a strong

security culture and to strengthen the way the

security culture influences the behaviour of

individual users. As the internal threat is

possibly the largest source of an organisation’s

security pain, there is potentially a huge value

to be gained from understanding how this

could be done.

C. The factors that influence security behaviour

To manage down the internal security threat, we

need to understand how a company’s culture and

practices can affect people’s behaviour.

The influential factors fall into two groups, as

illustrated in Figure 1. The first group,

encompassing the user’s understanding of what

behaviours the company expects of them, is

distinct from the second group, factors which

influence the user’s personal willingness to

constrain their behaviour to stay within

accepted and approved norms.

The user’s understanding of which behaviours

are expected of them — shown in the top half

of the diagram — are formed from:

what they are told;

what they see being practiced by others

around them;

their experience built from decisions they

have made in the past.

We’ll look at each of these factors in turn.

Figure 1: The factors that influence user security behaviours.

C1.1 What employees are told

Most organisations have a security manual that

comprises the company’s formal statement of its

position on security. This lays out its security

policies, practices, standards and procedures. It

might include an explicit statement of the

company’s security values and principles, though it

is more likely that the values and principles will

be articulated only implicitly through the policies

and standards laid down. This documentation can

be called the company’s body of knowledge.

The body of knowledge’s effectiveness at conveying what constitutes approved security behaviours varies according to:

its accessibility;
the completeness of its coverage;
the clarity of the stated security values;
the uniformity of its security values.

C1.2 What employees see in practice around them

Whether they are new staff trying to understand how to behave within their new company or existing staff more subliminally conforming to the norms of their work environment, people are very strongly influenced by the behaviour of their peers. They build their security attitudes and set their own security behaviour according to:

the values and attitudes demonstrated in the behaviour of senior management;
the consistency between the company’s stated values and the evident behaviour of their peers and colleagues;
whether other company practices (e.g. its human resources practices, its press relations practices) reflect its security values;
whether the company demonstrates that good security is important through having systems to monitor security behaviour, reward good behaviour, and respond to bad behaviour.

When there are numerous inconsistencies between the formal statements in the body of knowledge and what the person observes in practice around them, people will be guided more by what they see than by what they are told.

C1.3 The user’s security common sense and decision making skills

The body of knowledge cannot hope to spell out

minimum, cover those situations where following

a particular procedure correctly is crucial. It

cannot grow to encompass every situation; it must

avoid becoming so extensive that the atoms of

information buried within it become inaccessible

to well-intentioned but fully stretched users.

Hence staff cannot avoid having to make their

own security decisions as part of their daily tasks.

Staff make most of their security decisions in

non-critical situations where moderate deviation

from the ideal decision can be tolerated. Some

decisions will be made in critical or sensitive

situations where the user has to make an instant

decision about what to do without any reference

to written guidance. Over a period of time, each

person builds up their own personal history of

security decisions that they have made. They will

remember these as either good decisions or bad

decisions according to the feedback, if any, that

they received. In the absence of criticism, a

decision will be adopted as an acceptable course

of action, available to be repeated until a better

course of action presents itself. In this way, users

build their own personal and private body of

knowledge to supplement the shared corporate

body of knowledge.

These three factors combine to create the user’s

understanding of the accepted and approved behavioural norms at work.

[Source: John Leach, “Improving user security behaviour”, Computers & Security, 2003.]

We now need to look at the factors that influence the user’s personal willingness to constrain their behaviour to stay within those norms. Their willingness to conform is affected by:

• their personal values and standards of conduct;
• their sense of obligation towards their employer;
• the degree of difficulty they experience in complying with the company’s procedures.

We will now look at each of these in turn.

C2.1 The user’s personal values and standards of conduct

Most employees ascribe a high value to principles and believe in the importance of shared values and sensible rules. These employees can be expected to take up and apply the company’s system of values and standards, feeling more comfortable working amongst others to an agreed set of rules than working to their own proprietary rules or with no rules.

Tensions can arise when there is conflict between the individual’s values and the company’s values. Most people will not sustain that tension for long, and will either modify their principles or leave the company. Hence this tension is self-resolving and rarely leads to problems. There is little an organisation can do to address this situation, so we will not discuss it further here.

C2.2 The user’s sense of obligation

Employees feel a psychological pressure to behave according to company expectations and to constrain their behaviour voluntarily to stay within the bounds of accepted practice. A large part of this pressure comes from what is called the ‘psychological contract’ between employee and employer. For some this pressure is stronger than for others.

Each employee has a psychological contract with their employer, i.e. an unwritten reciprocal agreement to act in each other’s interest. The employee agrees to work diligently at their job and to conform to the company’s behavioural expectations in return for the company treating them well.

It is in the nature of a contract that each party

honours the contract to the degree that they

perceive the other party to be honouring it.

Hence, if a member of staff feels that they are

well treated, recognised and rewarded, then

they will gladly respond in kind and act in the

company’s best interest. If they feel that they

have been treated unfairly by their employer in

any area of their employment relationship, then

they will feel that the bonds have been

loosened and will not feel as obligated to act in

the company’s best interests. Indeed, if the

person feels that the company has done them

wrong, they could feel angry and compelled to

punish the company. That is when a company’s

users become its security enemies and can

become the source of major security threats.

Companies recognise that the rewards of work

vary from individual to individual. For some

people, work is largely about being in a social

environment with others. For some, work is

about earning a salary to pay the mortgage and to buy the toys. For others, it might be about getting good training and experience as they move quickly on their way to other positions in other companies.

Whatever their reasons for working, people will feel varying degrees of satisfaction and reward from being at work. Their level of satisfaction will determine the strength of their psychological contract with their employer. The strength of their psychological contract will determine the degree to which they constrain their behaviour to conform to approved and acceptable company norms.

C2.3 The difficulty in complying

The third component is whether the company makes it easy for their staff to comply with its standards and procedures, and whether there are temptations of personal gain seducing people not to comply.

If security controls are difficult to perform or are operationally burdensome, if they are of little obvious benefit or do not effectively prevent people exploiting opportunities for personal gain, then users will have a natural incentive to circumvent the controls. Even when staff recognise that security controls are implemented for good reasons, they have very little tolerance for controls that are neither effective, nor efficient, nor clear. The knowledge that their behaviour is being monitored and their compliance measured, and the weight of any penalties used to discourage non-compliance, will have some limited effect on how far staff are prepared to let their behaviour stray from mandated norms, but they do nothing to improve staff attitudes towards security.

D. The keys to better user security behaviour

There are six influential factors affecting how

users behave. Clearly, a company can expect to

influence some, but not all, of these. A

company cannot expect, for example, to have

much influence over its staff’s personal values

and standards of conduct or their intrinsic belief

in the benefit of following rules.

Companies can manage down their internal

security threat best by focusing primarily on

those factors that are realistically within their

control. They need to get the most leverage

they can out of the factors they can influence,

for they cannot presume that all staff will bring

to their work high personal standards and a

natural faith in the value of following rules.

Three of the above six factors are key to

improving security behaviour and driving down

the impact of the internal security threat. We

will focus on these three, discussing them in just

a moment in sections below. The other three,

lesser factors, we can deal with quickly here. As we have just seen, a company cannot expect

to have much influence on its staff’s personal

values and standards of conduct or their intrinsic

belief in the benefit of following rules. The best

course of action is, in a fair way, to divert contraindicated staff away from roles where the

company is most exposed to any shortfall in the

standard of its staff’s behaviour.

The company should make continual efforts to

ensure that its body of knowledge is readily

accessible to all its staff. It should recognise that

different staff will need to receive different

messages and receive those messages through

different channels. Building a strong body of

knowledge is not a trivial task. However, it is

well covered in the literature at large and we do

not need to discuss it further here.

The company should make continuous efforts to

ensure that its security controls are efficient,

effective, and properly positioned. This is a

labour of continuous improvement. However, it

is also obvious and we do not need to discuss it

further here.

The three factors that are key to improving user security behaviour are:

• The behaviour demonstrated by senior management and colleagues.
• The user’s security common sense and decision-making skills.
• The strength of the user’s psychological contract with the company.

We shall look at each of these in turn.

D1.1 The behaviour demonstrated by

others

What people see in practice around them

influences their attitudes and behaviour more

powerfully than what they are told. The company’s

body of knowledge will be undermined if its stated

principles, policies and procedures are contradicted

by the practices that people see in evidence

around them. What people are shown needs to

support rather than contradict what they are told.

If a company wants its users to practice correct

security, it needs to back up this desire with

systems to ensure that its principles and policies

are followed. If a few bad security practices are

allowed to establish themselves, then all security

practices are weakened in the eyes of staff.

Ensure that all senior management as well as junior staff have good security behaviour. Make a

point of providing feedback to staff on the

correctness of their behaviour, and of gathering

input from staff on where the body of knowledge

is being undermined by contrary messages in the

company’s pronouncements or contrary practices

in its systems. Reward staff for good security

behaviour, and require additional training or take

other appropriate steps for staff that demonstrate

unacceptable behaviour.

D1.2 The user’s security common sense and decision-making skills

A user’s own security decisions, once made,

become a part of the user’s personal body of

knowledge and carry forward into their future

security decisions. Therefore, a company has a

clear requirement to help its users to develop

good security common sense so that they can

make simple and straightforward security

decisions reliably and correctly themselves.

Otherwise it will not escape suffering a high

and persistent background level of security

worries, such as the familiar mistakes of people

forgetting to change default passwords on newly

installed equipment or using their own remote

dial-in facilities to avoid having to use the

corporate managed gateway.

Common sense is about having a realistic

practical understanding of how things work in

the real world and being able to make good

practical decisions unguided. Deciding whether

or not to believe what one hears, deciding how

to follow an unclear instruction, and making

tough decisions in complex situations all require

sound common sense. Common sense is

something that everyone recognises when they

see it. It is a decision-making skill, not simply

an accumulation of knowledge.

Security common sense is something that can be

taught. Teach the user the principles that they

need in order to guide their decision making, but

keep the number of examples down to those few that are needed to illustrate the principles. Avoid

providing too many examples, which will take

decision making away from the user and put it

back in the body of knowledge. You will leave

the user with weaker, not stronger, decision-making skills. This is where many security

awareness and education courses go wrong.

Focus on developing the users’ security

decision-making skills. Thereafter, provide

people with continual feedback and support.

Give them credit when they do something well,

and let them know when they err, indicating a

better decision that they could have made.

Periodically refresh them with widely applicable

examples so that users can continually re-centre

their decision-making framework and prevent it

wandering off-centre over time.

D1.3 The user’s psychological contract with their employer

If a company ensures that its overt behaviour

supports rather than contradicts its body of

knowledge, and it helps staff develop and

strengthen their security common sense, it will

reduce the number and severity of user security

errors. It will also want to reduce the willful

component of the internal security threat: user

security negligence and deliberate attacks by

the user. This is addressed by ensuring that users

feel strongly bound by their psychological

contracts with the company.

We return to the observation made above that it

is in the nature of a contract that people will

honour their psychological contract to the degree

that they perceive the company to be honouring

its part of the contract. Hence, a company can

bind its users to its code of good security conduct

by showing that it is bound to the code itself.

Earlier in our discussion, the issue was one of

ensuring that practice on the ground was not

allowed to contradict the body of knowledge.

Here the issue is to ensure that the company is

seen to be boldly taking security seriously rather

than timidly keeping its security efforts hidden from view. This issue is, of course, closely

interwoven with the earlier issue, and both

aspects contribute to the creation of a strong

security culture. The creation of a strong security

culture is the best way to motivate staff to

behave consistently in a security-conscious way.

Look for guidance from the practices of

companies with strong safety cultures. In

companies working within high-hazard

industries, one would expect to see safety

discussed regularly by senior management, both

in board and strategy meetings and in

communications with staff. Safety issues would

be reported on regularly and openly, and

shortcomings would be treated as serious issues

warranting urgent management attention.

Safety mandates carry conviction, and staff are

consistently safety-conscious.

For a company to strengthen its security culture,

it should expect to follow similar practices. Be

seen to be discussing security issues at senior

management levels and make security a topic of

regular communication with staff. Report on

security issues openly within the company. Deal

with serious shortcomings under senior

management direction. Show clearly that

security is an important part of how senior

management runs the business. Then the

corporate security mandates will carry

conviction, employees will be consistently

security-conscious, and staff will align their

behaviour to the corporate security mandates.

The converse is too familiar. If security does not

feature in discussions or communications, and

the company’s senior management acts

inconsistently from issue to issue, staff will

perceive the company to have a weak security

culture and will not consider themselves duty-bound to follow company mandates. They will

not expect to do any more themselves than they

see other people do, even if it falls well short of

the written policies. If staff feel their corporate

superiors do not demonstrate that honouring

corporate values and principles is important, they will not make any effort to abide by the rules

themselves, other than by default.

It is a simple matter of leadership. Strong

leadership creates a strong culture, and a strong

culture gives clear direction to staff at all levels.

This helps to illustrate why honour and strong

leadership are so important in the fighting

forces, where men and women might be called

on to push themselves to their limits and to put

themselves in positions of personal danger.

Interestingly, this also illustrates why companies

with a weak corporate culture find culture

change so difficult, whereas one might at first

have expected that they, of all companies,

would find culture change relatively easy.

E. Conclusion

A company’s primary objective in influencing

its users’ security behaviour is to drive down the

level and severity of the security incidents that

it experiences. Poor user security behaviour is a

significant, perhaps even major, determinant of

the level of security incidents that a company suffers.

[Figure 9. The ways to improve user security behaviour.]

Hence, companies have a ready

opportunity to make significant security gains

by having a strong security culture and by

strengthening the influence that the culture

exerts on individual users.

Of the various influential factors, we have

focused on three that are key. A company can

maximise its leverage from these three if it:

• makes sure that the behaviour of senior management and the company’s systems support rather than contradict the body of knowledge;
• strengthens the users’ security common sense and trains staff to dev…

This is the end of the preview.

ATTACHMENT PREVIEW


litreviewassignment.docx

INTRODUCTION TO ICT RESEARCH METHODS

Literature Review Assignment

A literature review is an account of what has been published on a topic (in journal articles, conference

proceedings, books and other relevant sources) by recognised researchers. Understanding the prior

research on a particular topic is the basis of most new research. Researchers must know what has been

studied, discussed and recommended by others in related areas.

This assignment is intended to:

• Provide you with practice finding peer-reviewed, recent articles about one topic
• Allow you to organise your ideas about that topic in a meaningful way (including both synthesis and critical analysis).

In this assignment you will review the published literature on one of the following topics and write a

literature review that synthesizes what you have found into a summary of what is, and is not, known on

the topic. You should use the topic as a starting point and choose a focussed subset of the topic.

User security behaviour – Information technology security is an increasingly important research topic.

Although organisations spend large amounts on technology that can help safeguard the security of their

information and computing assets, increased attention is being focused on the role people play in

maintaining a safe computing environment. This issue is important both for organisational and home

computing.

Affective computing – Affective computing is “computing that relates to, arises from, or deliberately

influences emotions” [1]. Developments in affective computing facilitate more intuitive, natural computer

interfaces by enabling the communication of the user’s emotional state. Despite rapid growth in recent

years, affective computing is still an under-explored field, which holds promise to be a valuable direction

for future software development.

[1] Picard, R. W. (1997). Affective computing. Massachusetts: MIT Press.

Interactive game playing and network quality – Understanding the impact of network conditions on

online game player satisfaction has been a major concern of network game designers and network

engineers, and has received research attention from both angles. For example, a number of studies

have sought to evaluate the effect of aspects of network quality, such as network delay and network

loss, on online gamers’ satisfaction and behaviour.

The effectiveness of e-learning – Education is increasingly supported by ICT, with the term e-learning

being used as a general term to refer to many forms of technology-supported learning. Much of the e-learning research has had a technology focus (e.g. descriptions of implementations) or has been limited to studies of adoption (i.e. will people use it?), but there has been less research on the impact of e-learning on outcomes for students.

Mobile analytics– The term ‘big data’ refers to data sets that are large and complex and hence require

new approaches to deal with them. Data analytics has become increasingly important to business and

much research has been undertaken into how big data can be used to help organisations make

decisions. Mobile analytics is a growing area of focus for data scientists.

To do

To successfully complete the assignment, you must begin searching for relevant literature immediately. The

skills you obtained in your Transition or Foundation unit and have practised in tutorials for BSC203 will be

invaluable.

Find at least 10 articles related to your chosen topic. To qualify as a source of information that you can use

for the assignment, these main articles must report results of research studies (i.e. not just authors’

opinions). Each article must also:

• Have been published in a refereed journal or conference proceedings (though you may obtain the article through an online source)
• Have an extensive references section.

In addition you may choose to supplement these articles with a few articles from other sources or that do

not present the authors’ own results.

After reading each article, you should think about how they all fit together. Your review should be organised

by concepts, such as findings, rather than by sources of information. Do not proceed through the articles

one-by-one. Your literature review should include an introduction, a main body that reviews the literature

(and which you should subdivide further), and a conclusion.

Format guidelines

• Give your literature review a title that clearly reflects the content of your review.
• Include an introduction section that states the purpose of the review and a conclusion section. Include other sub-sections to help structure your work.
• Use 12-point font.
• Your review should be approximately 1500 words in length.
• Include appropriate citations throughout the review and a list of references at the end. Referencing should be in APA or IEEE style.
• Your review should include a minimum of 10 sources of information.

MARKING SCHEDULE

Structure (10 marks)
• Does the introduction describe the purpose of the literature review?
• Does the body present information in an organised and logical manner?
• Is there an effective conclusion that summarises the main points discussed?

Content and Research (60 marks)
• Does the title reflect the contents of the literature review?
• Is there evidence of adequate understanding of the literature included?
• Is the organisation/grouping of the literature effective with the main points clearly related to the purpose of the review?
• Are the main points supported by evidence (are not just your opinions)?
• Is the material well synthesised?

Use of Sources (20 marks)
• Are at least 10 references cited?
• Are mainly academic sources (e.g. journal articles and conference papers) used?
• Is it correctly referenced in APA or IEEE style (‘in-text’ referencing and reference list)?
• Is it in your own words?

Presentation (10 marks)
• Fluent (correct grammar, spell-checked and correctly punctuated)?
• Correctly structured (paragraphing, topic sentences and flow of ideas)?
• Have section headings been used to help structure the main text?

TOTAL: 100 marks

ATTACHMENT PREVIEW


Personalityattitudesand intentions.pdf

Computers & Security 49 (2015) 177-191. Available online at www.sciencedirect.com (ScienceDirect); journal homepage: www.elsevier.com/locate/cose

Personality, attitudes, and intentions: Predicting initial adoption of information security behavior

Jordan Shropshire (a), Merrill Warkentin (b,*), Shwadhin Sharma (b)

(a) University of South Alabama, School of Computing, 150 Jaguar Drive, Mobile, AL 36688-7274, USA
(b) Mississippi State University, College of Business, P.O. Box 9581, Mississippi State, MS 39762-9581, USA

Article history: Received 23 July 2014; Received in revised form 22 September 2014; Accepted 3 January 2015; Available online 12 January 2015.

Keywords: Attitudes; Intention; Personality; Information security behavior; Conscientiousness; Agreeableness

Abstract: Investigations of computer user behavior become especially important when behaviors like security software adoption affect organizational information resource security, but adoption antecedents remain elusive. Technology adoption studies typically predict behavioral outcomes by investigating the relationship between attitudes and intentions, though intention may not be the best predictor of actual behavior. Personality constructs have recently been found to explain even more variance in behavior, thus providing insights into user behavior. This research incorporates conscientiousness and agreeableness into a conceptual model of security software use. Attitudinal constructs perceived ease of use and perceived usefulness were linked with behavioral intent, while the relationship between intent and actual use was found to be moderated by conscientiousness and agreeableness. The results show that the moderating effect of personality greatly increases the amount of variance explained in actual use. © 2015 Elsevier Ltd. All rights reserved.

1. Introduction

Why do some well-meaning computer users practice safe

computing habits, while others do not, despite the intentions

to do so? As early as the 12th Century, Saint Bernard of

Clairvaux noted that good intentions do not always lead to

positive actions (basis for the adage that “the road to hell is

paved with good intentions”). It is common for individual

computer users, despite knowing that their individual information resources are at risk, to fail to act on their intentions to

practice safe computing behavior. (Safe behaviors include

frequently changing passwords, archiving important data, * Corresponding author. Tel.: þ1 662 325 1955; fax: þ1 662 325 8651.

E-mail address: m.warkentin@msstate.edu (M. Warkentin).

http://dx.doi.org/10.1016/j.cose.2015.01.002

0167-4048/© 2015 Elsevier Ltd. All rights reserved. scanning for malware, avoiding opening suspect emails, etc.)

It is imperative that employees and others follow the intent to

adopt secure technologies (such as anti-virus and antispyware software) with actual usage behavior (Furnell et al.,

2007), but such follow-through is not universal. People

within organizations, despite having the intention to comply

with information security policies, are still considered to be

the weakest link in defense against existing information security threats, as their actual security behavior may differ from the

intended behavior (Han et al., 2008; Guo et al., 2011; Capelli

et al., 2006; Vroom and Solms, 2004). These “trusted agents”

inside the firewall may have the intention to comply with the

organization’s policy. However, there is a good probability that they engage in risky behaviors of violating the integrity and

privacy of sensitive information through non-malicious accidental actions such as passive noncompliance with security

policies, laziness, or lack of motivation (Warkentin and

Willison, 2009; Rhee et al., 2009). It is a common observation

that people often fail to act in accordance with their behavioral intention (Ajzen et al., 2004). This is one of the reasons

why the “internal threat” is often cited as the greatest threat to

organizational information security (Capelli et al., 2006)

despite employees usually having the intention to comply

with information security policies.

However, the issue of intention leading to actual use has

been uncritically accepted in Social Science research and information systems (IS) research (Bagozzi, 2007). Venkatesh

et al. (2003, p. 427) stated that “role of intention as predictor

of behavior…. has been well established.” Ajzen and Fishbein

(1980, p. 41) stated that “intention is the immediate determinant of behavior.” The primary focus of the previous research

has been on the formation of behavioral intention to measure

the actual information technology (IT) behaviors almost to the

exclusion of other factors that would affect the actual

behavior of the respondent (Limayem et al., 2007). Many IS

researchers have used behavioral intention to measure actual

behavior of users (for example, Ifinedo, 2012; Johnston and

Warkentin, 2010; Herath and Rao, 2009; Sharma and

Crossler, 2014; Warkentin et al., 2012; Dinev and Hu, 2007).

In the context of protective behaviors (such as wearing seat

belts, eating healthy diets, smoking cessation, etc.), it is

evident that a great percentage of individuals have the intent

to act in safe ways, but only some of these individuals will act

on this intent. Empirical support for the relationship between

user intentions and actual behavior is weak (Bagozzi, 2007),

indicating that there may be other factors that explain why

certain individuals may not act on their intentions and follow

through with appropriate behaviors. Studies suggest that

measuring intention rather than actual behaviors can be

troublesome as intention doesn’t always lead to behaviors

(Crossler et al., 2013; Anderson and Agarwal, 2010; Mahmood

et al., 2010; Straub, 2009). This gap between intention and

behavior could be attributed to differences in cognitions or

other unknown variables (Amireault et al., 2008) and to the fact

that intentions are usually under cognitive control (Gollwitzer,

1996), whereas actual choices are often made rather impulsively and even unconsciously (Willison and Warkentin, 2013;

Wansink and Sobal, 2007). Fishbein and Ajzen (1975) used a

normative concept to explain the intention-behavior discrepancy while past behavior or habit have also been used as a

moderating variable to explain this discrepancy (Limayem

et al., 2007; Oullette and Wood, 1998; Triandis, 1977).

A few previous research studies have found additional predictive ability of intention on behavior by the inclusion of constructs such as self-identity (Sparks and Guthrie, 1998),

anticipated regret (van der Pligt and deVries, 1998), affect

(Manstead and Parker, 1995), and moral norms (Conner and

Armitage, 1998). Campbell (1963) traced the discrepancy to

individuals’ dispositions: individuals with moderate dispositions respond favorably in the hypothetical context but unfavorably in the more demanding real context. Furthermore,

behavioral intention to predict specific behavior may depend

on “individual difference” factors or personality traits (Wong and Sheth, 1985). A combination of personality traits helps

to narrow the discrepancy between intention and behavior by

increasing predictive ability of intention on user’s behavior

(Corner and Abraham, 2001; Courneya et al., 1999; Rhodes and

Courneya, 2003). Various personality factors have been suggested as possible moderators of the intention-behavior

relationship, such that certain personality traits may explain

why only some individuals will act upon their intentions.
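The moderation hypothesis described in this introduction has a standard statistical signature: in a regression of actual behavior on intention, the personality trait, and their product, a nonzero coefficient on the product term means the intention-to-behavior slope varies with the trait. The sketch below illustrates that idea on simulated data; the variable names, effect sizes, and plain least-squares fit are illustrative assumptions for exposition, not the study's actual measures or analysis.

```python
import numpy as np

# Toy illustration of moderation: behavior depends on intention, but the
# strength of that link grows with conscientiousness. All data here are
# simulated; nothing below reproduces the study's dataset or method.
rng = np.random.default_rng(0)
n = 500
intention = rng.normal(size=n)          # standardized behavioral intent
conscientiousness = rng.normal(size=n)  # standardized trait score

# True data-generating model: the intention->behavior slope is
# (0.2 + 0.5 * conscientiousness), i.e. the trait moderates the slope.
behavior = (0.2 * intention
            + 0.5 * intention * conscientiousness
            + rng.normal(scale=0.5, size=n))

# Fit behavior ~ intention + trait + intention*trait by least squares.
X = np.column_stack([np.ones(n), intention, conscientiousness,
                     intention * conscientiousness])
coef, *_ = np.linalg.lstsq(X, behavior, rcond=None)
b0, b_int, b_trait, b_interaction = coef

# A clearly nonzero interaction coefficient is the statistical signature
# of moderation: intention predicts behavior better for high-trait users.
print(f"intention: {b_int:.2f}, interaction: {b_interaction:.2f}")
```

With the simulated interaction set to 0.5, the fitted interaction coefficient comes out near 0.5 while the main intention effect stays near 0.2, which is the pattern the paper's moderation argument predicts: intent translates into action more reliably as the trait score rises.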

The present study seeks to establish the role of personality

factors in determining the likelihood that an individual will or

will not follow through and act on the intent to engage in

protective behaviors. Although this has been demonstrated in

other disciplines (Meyerowitz and Chaiken, 1987), it has just

begun to be studied in the information security field. For

instance, Milne et al. (2000) recognized the role of personality

factors in influencing an individual’s perceptions of risk and

vulnerability, and therefore his or her adoption of recommended responses to threats. Warkentin et al. (2012a) explain

how the big five personality traits may influence intention to

comply with security policies. Other studies have analyzed

personality with regards to security-based decision making

(Da Veiga and Eloff, 2010; Mazhelis and Puuronen, 2007). The

IS literature has started to use personality assessment to understand users’ behavior, and one of the widely used personality tests is the “Big Five” test (Warkentin et al., 2012a; Karim

et al., 2009; Shropshire et al., 2006). Of these personality

traits considered, conscientiousness has been found to be

consistently related to intentions and behaviors (Conner and Abraham, 2001) and is thus the most important personality

trait in relation to behaviors (Booth-Kewley and Vickers, 1994;

Hu et al., 2008). People with higher conscientiousness are

thought to be more organized, careful, dependable, selfdisciplined and achievement-oriented (McCrae and John,

1992), adopt problem-focused rather than emotion-focused

coping responses (Watson and Hubbard, 1996) and are less

likely to use escape-avoidance strategies (O’Brien and

Delongis, 1996). Information security executives with a

higher degree of conscientiousness are inclined to react more

cautiously to a given situation (Li et al., 2006). Similarly,

agreeableness has been found to have significant influence on

individual concern for information security and privacy

(Korzaan and Boswell, 2008). Individuals with agreeableness

traits are worried about what others would think of them and

are more likely to be concerned about privacy issues (Brecht

et al., 2012). Previous research has found agreeableness and

conscientiousness to predict organizational citizenship behaviors such as following rules and procedures when behavior

is not monitored (Rogelberg, 2006; Organ and Paine, 1999;

Podsakoff et al., 2000). Konovsky and Organ (1996) used

agreeableness and conscientiousness as two of the big five

personalities that would predict satisfaction and some forms

of organizational citizenship behavior. The choice of these

conscientiousness and agreeableness to study the intentionbehavior relationship for this paper is theoretically justified.

Moreover, the other three traits are not conceptually linked to

secure behaviors.
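Operationally, Big Five trait scores such as conscientiousness and agreeableness are typically computed by averaging a respondent's Likert ratings after reverse-coding negatively keyed items. A minimal sketch of the mechanics (the items, reverse-key positions, and scale below are hypothetical, not taken from any instrument cited above):

```python
# Toy scoring of two Big Five traits from Likert responses (1-5 scale).
# Item indices and reverse-key positions are hypothetical; real instruments
# (e.g., the BFI) define their own items and reverse-keyed entries.

def score_trait(responses, reverse_keyed, scale_max=5):
    """Average Likert responses after reverse-coding flagged item indices."""
    total = 0.0
    for i, r in enumerate(responses):
        # Reverse-coding flips the scale: a 2 on a 1-5 item becomes a 4.
        total += (scale_max + 1 - r) if i in reverse_keyed else r
    return total / len(responses)

# Example: four conscientiousness items; item 2 is reverse-keyed
# (high agreement with a "tends to be disorganized" item means LOW trait).
conscientiousness = score_trait([4, 5, 2, 4], reverse_keyed={2})
agreeableness = score_trait([3, 4, 4, 5], reverse_keyed=set())
print(conscientiousness, agreeableness)  # 4.25 4.0
```

Published instruments fix the item sets and reverse-key lists; the helper above only illustrates the scoring step.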

Computers & Security 49 (2015) 177–191

For the present study, the participants were shown a web-based tool that can provide useful information regarding security risks, and were informed that they could visit the

website later from their own computer to assess its vulnerabilities. Besides connecting self-reported behavioral

intent with actual security program usage behavior, this study

established the role of personality in moderating the former

relationship. Specifically, conscientiousness and agreeableness were shown to lead to increased usage behavior among

those who reported intent to adopt this security software.

2. Theoretical background

2.1. Endpoint security

The greatest threat to information security lies not beyond the

security perimeter (hackers, malware, etc.), but rather with

the careless or malicious actions of internal users such as

employees and other trusted constituents with easy access to

organizational information resources (Willison and Warkentin, 2013; Pfleeger and Caputo, 2012; Posey et al.,

2011; Warkentin and Willison, 2009; Capelli et al., 2006). Each

individual end user represents an endpoint in a computer

network or a system, and without security-compliant behaviors on the part of each end user, the network will not be

secure. Secure behaviors include making regular backups,

changing passwords, scanning for viruses, and many other

activities identified by Whitman (2003) and others. Other security activities include updating applications, installing

patches, turning off unnecessary ports, and configuring firewalls (Rosenthal, 2002; Stanton et al., 2003; Whitman, 2003).

There are salient differences between information security

software usage and usage of other information technologies.

In contrast to productivity-enhancing technology such as

email utilities or spreadsheet applications, the benefits associated with security software are not immediately evident

(Warkentin et al., 2004). Rather than providing a clear functionality for daily workplace activity, security software’s

benefits often go largely unnoticed. Information security tools,

such as anti-spyware programs or biometric access controls,

provide a means of controlling computing environments or

maintaining a healthy technological baseline from which to

employ productivity-enhancing technologies. Therefore, performance benefits may not be explicitly recognized by end

users. In addition, many end users lack the ability to appraise

security risks and identify appropriate countermeasures

(Adams and Sasse, 1999; Furnell et al., 2002; Siponen, 2001).

The burden falls upon IT managers, information security

specialists, and software designers to understand and predict

problems related to endpoint security, and to address the

sources of threats in an appropriate manner. Towards a better

understanding of end user behaviors, the dependent variable

of interest is initial use (adoption) of information security

software by individual end users.

2.2. Attitude

Following Fishbein and Ajzen's seminal Theory of Reasoned

Action (1975), many behavioral studies have used attitude to

explain behavioral intentions (Karahanna et al., 2006). Within

the information systems field, this theoretical foundation has

been extended to predict behavioral intent to adopt and use

various information technologies (Assadi and Hassanein, 2010). The Technology Acceptance Model (TAM) (Davis,

1989), one of the most widely applied and cited models in

the field, comprises two independent variables: perceived

usefulness (PU) and perceived ease of use (PEOU) (Davis, 1989).

PU is defined as “the degree to which a person believes that

using a particular system would enhance his or her job performance,” while PEOU is “the degree to which a person believes that using a particular system would be free of effort.”

PU and PEOU were selected as antecedents of adoption

behavior in this research for three reasons. First, although the

two constructs were originally developed to explain adoption

of spreadsheet software, they have also been applied to many

other information technologies with much success (Bagozzi,

2007; Hirschheim, 2007; Karahanna et al., 2006; Venkatesh

et al., 2007; Wang and Benbasat, 2007). They have also been

referenced in a variety of disciplines outside of information

systems (Duxbury and Haines, 1991). Finally, the TAM model

is more parsimonious than later models, such as the Unified

Theory of Acceptance and Use of Technology (UTAUT)

(Venkatesh et al., 2003).

A third attitudinal construct, perceived organizational

support (POS), was included in the research model. POS hails

from the organizational citizenship behavior research stream,

and is defined as the degree to which an individual believes

that the organization values his or her contribution and cares

about his or her well-being (Eisenberger et al., 1986). There has

been very limited research on perceived organizational support (POS) as a direct antecedent of IS security compliance,

though IS research has been using organizational support as a

control variable. It has been used to predict a range of

employee organizational citizenship behaviors (Peele, 2007),

including the adoption and use of information technology

(Reid et al., 2008). Greene and D’Arcy (2010) analyzed the influence of employee-organization relationship factors such as

POS on the decision of users’ IS security compliance. Organizational motivational factors such as job satisfaction and POS

all have positive impact on security compliance intention

(D’Arcy and Greene, 2009). POS differs from PEOU and PU in

that it concerns individual perceptions of the organization,

not the technology. Previous studies have stated that employees who perceive support from the organization take it as

a commitment of the organization towards them and repay it

through commitment towards the organization such as

focusing on organizational goals and policies (Eisenberger

et al., 1986; Rhoades and Eisenberger, 2002). Because of its

wide range of applications, and because it represents an

additional dimension of end user attitude, POS was included

in the research model.

2.3. Personality

Personality traits have long been used to explain various

behavioral outcomes (Bosnjak et al., 2007; Funder, 1991; James

and Mazerolle, 2002). Within information systems research,

personality constructs have been used in various capacities,

including system use (Klein et al., 2002; Pemberton et al., 2005;

Vance et al., 2009; Kajzer et al., 2014). Further, Burnett and

Oliver (1979), for example, observed that personality, product usage, and socio-economic variables moderate the effectiveness of attitudes on use behavior. Because of the potential increase in predictive power, the psychological constructs

conscientiousness and agreeableness were used in this research

to provide an improved understanding of adoption and use of

security software (Chenoweth et al., 2007; Devaraj et al., 2008;

Shropshire et al., 2006; Vance et al., 2009). Both constructs

stem from the Five Factor Model of personality as defined by

John and Srivastava (1999). These two were specifically chosen

because they were found to be highly relevant factors in

contexts similar to organizational information security, such

as precaution adoption, safety, and other related domains

(Geller and Wiegand, 2005; Ilies et al., 2006). Cellar et al. (2001)

found conscientiousness and agreeableness to be the two most

influential personality traits in the workplace environment. Also,

previous studies have shown conscientiousness and agreeableness as better predictors of organizational citizenship

behaviors such as following rules and procedures when

behavior is not monitored (Rogelberg, 2006; Organ and Paine,

1999; Podsakoff et al., 2000). Konovsky and Organ (1996) also

chose conscientiousness and agreeableness as the two most

important personality traits to predict satisfaction and organizational citizenship behavior in the work environment.

The personality factor conscientiousness is described as

“socially prescribed impulse control that facilitates task and

goal-oriented behavior, such as thinking before acting,

delaying gratification, following norms and rules, and planning, organizing, and prioritizing tasks.” Several behavioral

studies have identified a significant inverse relationship between accident involvement and conscientiousness (Cellar

et al., 2001). Individuals who rate themselves as higher in

delaying gratification, thinking before acting, following norms

and rules, and planning and organizing tasks were less likely

to be involved in accidents than those who rated themselves

as lower on the same attributes (Geller and Wiegand, 2005).

Agreeableness is defined as “contrasting a pro-social and

communal orientation towards others with antagonism, and

including traits such as altruism, tender-mindedness, trust

and modesty.” As with conscientiousness, agreeableness has

been found to have a significant relationship with work safety,

accident involvement, and organizational citizenship (Cellar

et al., 2001; Ilies et al., 2006); those with stronger interpersonal orientations are more likely to agree to adopt safety

recommendations.

3. Research hypotheses

The present study investigates the relationship between attitudes, personality, and the initial use (adoption behavior) of

information security software (see Fig. 1). First, the relationship between the attitudinal constructs (perceived ease of use,

perceived usefulness, and perceived organizational support)

and adoption intention is confirmed. Then, the effects of

adoption intention, conscientiousness, and agreeableness on

initial use are explored. Specifically, it is posited that the

personality constructs moderate the relationship between

intent and use.
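One informal way to probe such a moderation effect is to split the sample on the trait and compare the intention–use correlation across subgroups; a full analysis would fit a regression with an interaction term. The sketch below uses invented data purely for illustration (the records, split point, and variable coding are all assumptions, not the study's data):

```python
# Moderation check sketch: if conscientiousness moderates the
# intention-use link, the correlation between intention and use should
# be stronger in the high-trait subgroup than in the low-trait subgroup.

def pearson_r(xs, ys):
    """Pearson correlation coefficient of two equal-length sequences."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

# Hypothetical records: (adoption intention, initial use, conscientiousness)
records = [
    (1, 0, 2.0), (2, 1, 4.5), (3, 1, 2.5), (4, 1, 4.0),
    (5, 5, 4.8), (2, 2, 1.5), (4, 0, 1.8), (5, 4, 4.2),
]
split = 3.0  # trait split point (hypothetical)
high = [(i, u) for i, u, c in records if c >= split]
low = [(i, u) for i, u, c in records if c < split]

r_high = pearson_r([i for i, _ in high], [u for _, u in high])
r_low = pearson_r([i for i, _ in low], [u for _, u in low])
print(f"r(high trait)={r_high:.2f}  r(low trait)={r_low:.2f}")
```

In this invented sample the intention–use correlation is strong for high-trait respondents and near zero for low-trait ones, which is the pattern the moderation hypotheses predict.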

The first three hypotheses correspond with the attitudinal

variables. Perceived Usefulness (PU) is “the degree to which a

person believes that using a particular system would enhance

his/her job performance” (Davis, 1989). Previous studies show Fig. 1 e Research model. that behavioral intention to use an Information System is

largely driven by perceived usefulness (Davis, 1989, 1993;

Straub, 2009; Fu et al., 2006). Perceived Ease of Use (PEOU) is

the individual’s assessment of the mental effort involved in

using a system (Davis, 1989). Prior research indicates that

perceived ease of use is a significant determinant of behavioral intention to use information technology (Gefen and

Straub, 2000; Davis et al., 1989, 1992). Similarly, TAM2 and

TAM3, which are expansions of Technology Acceptance Model

(TAM) show PU and PEOU both affecting the behavioral

intention to use a technology (Venkatesh and Davis, 2000;

Venkatesh and Bala, 2008). The roles of perceived usefulness

and perceived ease of use on IS security adoption have also

been studied regularly in the past (Lee and Kozar, 2008; Lu

et al., 2005). An individual’s intention to adopt security software has been regularly linked to usefulness of the security

software and its ease of use. Thus, it is hypothesized that:

H1. perceived ease of use is positively associated with intention to adopt security software.

H2. perceived usefulness is positively associated with intention to adopt security software.

Perceived Organizational Support (POS) strengthens the

belief that the organization recognizes and rewards expected

behavior, which in return encourages employees to be dedicated and loyal to the organization and its goal (Rhoades and

Eisenberger, 2002). There have been numerous studies that

have found a positive relationship between POS and employees’ willingness to fulfill conventional job responsibilities

that typically are neither formally rewarded nor contractually

enforceable (Settoon et al., 1996). In the IS field, perceived organizational support has been shown to have a positive impact

on security compliance intention of the employees (D’Arcy

and Greene, 2009). Therefore, this study posits the following:

H3. perceived organizational support is positively associated

with intention to adopt security software.

The correlation between adoption intention and initial

software use is also of interest. In the past, technology adoption studies have focused mainly on behavioral intent without

actually measuring initial use. While there have been abundant IS research studies that have measured intention of

people to comply or violate norms, laws or policies, there have

been very few studies that have measured actual behavior of

the users because of the level of difficulty in its measurement (Warkentin et al., 2012b). Recent findings have questioned the

strength of the relationship between behavioral intent and

behavior outcome in various situational contexts (Abraham

et al., 1999; Norman et al., 2003; Paulin et al., 2006). As such,

it is necessary to test the relationship between adoption

intention and initial use of security software:

H4. adoption intention is positively associated with initial use

of security software.

Although intentions are commonly used to predict

behavioral outcomes, dispositional factors such as personality

may account for even more variance (Ilies et al., 2006;

Karahanna et al., 1999; Mowen et al., 2007; Zhang et al.,

2007). Personality has been theorized to significantly impact

the relationship between intentions and behaviors, although

few studies have yielded conclusive evidence (Ajzen, 2005;

Endler, 1997; Gountas and Gountas, 2007). Therefore, this

research investigates the role of personality as a moderator of

the intention–behavior relationship:

H5. the higher the level of conscientiousness, the stronger the

relationship between adoption intention and initial use of

security software.

H6. the higher the level of agreeableness, the stronger the

relationship between adoption intention and initial use of

security software.

4. Method

4.1. Procedure

Subjects were introduced to a new web-based security program, called Perimeter Check, in a twenty-minute presentation (see Fig. 2). Perimeter Check is unique in that it provides

security measures that are not commercially available. It analyzes the user’s computing environment, identifies potential

vulnerabilities, and recommends actions that might improve

the safety level for various computer activities (See Appendix

A for a more complete description of this security program).

Because it is web-base…


Future Internet 2014, 6, 760-772; doi:10.3390/fi6040760

Future Internet (Open Access)

ISSN 1999-5903

www.mdpi.com/journal/futureinternet

Article

Reducing Risky Security Behaviours: Utilising Affective Feedback to Educate Users †

Lynsay A. Shepherd 1,*, Jacqueline Archibald 2 and Robert Ian Ferguson 1

1 School of Science, Engineering and Technology, Abertay University, Bell Street, Dundee DD1 1HG, Scotland; E-Mail: i.ferguson@abertay.ac.uk

2 Dundee Business School, Abertay University, Dundee DD1 1HG, Scotland; E-Mail: j.archibald@abertay.ac.uk

† This article was originally presented at the Cyberforensics 2014 conference. Reference: Shepherd, L.A.; Archibald, J.; Ferguson, R.I. Reducing Risky Security Behaviours: Utilising Affective Feedback to Educate Users. In Proceedings of Cyberforensics 2014, University of Strathclyde, Glasgow, UK, 2014; pp. 7–14.

* Author to whom correspondence should be addressed; E-Mail: lynsay.shepherd@abertay.ac.uk;

Tel.: +44-(0)1382-308685.

External Editor: Mamoun Alazab

Received: 31 July 2014; in revised form: 22 October 2014 / Accepted: 6 November 2014 /

Published: 27 November 2014

Abstract: Despite the number of tools created to help end-users reduce risky security

behaviours, users are still falling victim to online attacks. This paper proposes a browser

extension utilising affective feedback to provide warnings on detection of risky behaviour.

The paper provides an overview of behaviour considered to be risky, explaining potential

threats users may face online. Existing tools developed to reduce risky security behaviours

in end-users have been compared, discussing the success rates of various methodologies.

Ongoing research is described which attempts to educate users regarding the risks and

consequences of poor security behaviour by providing the appropriate feedback on the

automatic recognition of risky behaviour. The paper concludes that a solution utilising a

browser extension is a suitable method of monitoring potentially risky security behaviour.

Ultimately, future work seeks to implement an affective feedback mechanism within the

browser extension with the aim of improving security awareness.

Keywords: usable security; end-user security behaviours; affective computing; user monitoring techniques; affective feedback; security awareness

1. Introduction

A lack of awareness surrounding online behaviour can expose users to a number of security flaws.

Average users can easily click on malicious links, which are purportedly secure; a fact highlighted by

the number of users who have computers infected with viruses and malware [1]. This paper aims to

identify potential security issues users may face when browsing the web such as phishing attempts and

privacy concerns. Techniques developed to help educate users regarding their security awareness have

been reviewed, comparing the methods used to engage users, discussing the potential flaws in such tools.

Previous research has indicated affective feedback may serve as a successful method of educating users

about risky security behaviours [2–4], thus, improving system security. The paper proposes the use of a

browser extension to monitor users' actions and detect risky security behaviour. Future work seeks to

utilise affective feedback to improve the security awareness of end-users, with a view to improving

overall system security.

2. Background

Risky security behaviour exhibited by end-users has the potential to leave devices vulnerable to

compromise [5]. Security tools are available, such as firewalls and virus scanners, which are designed

to aid users in defending themselves against potential online threats; however, these tools cannot stop

users engaging in risky behaviour. End-users continue to engage in risky behaviour indicating that the

behaviour of users needs to be modified, allowing them to consider the security implications of their

actions online. This section explores the definition of risky security behaviour, the role of affective

feedback and outlines potential threats users may face when browsing the web.

2.1. Risky Security Behaviour

What constitutes risky behaviour is not necessarily obvious to all end-users and can be difficult to

recognise. There are multiple examples of behaviour which could be perceived as risky in the context of a

browser-based environment, e.g., creating weak passwords or sharing passwords with colleagues [6,7],

downloading data from unsafe websites [8] or interacting with a website containing coding

vulnerabilities [9].

Several pieces of research have been conducted in an attempt to define and categorise security

behaviour. One such attempt was documented in a 2005 paper by Stanton et al. [6] where interviews

were conducted with IT and security experts, in addition to a study involving end-users in the US, across

a range of professions. The findings produced a taxonomy consisting of six identified risky behaviours:

intentional destruction (e.g., hacking into company files, stealing information), detrimental misuse,

dangerous tinkering, naïve mistakes (perhaps choosing a weak password), aware assurance, and basic

hygiene. Conversely, in 2012, Padayachee [10] developed a taxonomy, categorising compliant security

behaviours whilst investigating if particular users had a predisposition to adhering to security behaviour. The results of the research highlighted elements which may influence security behaviours in users,

e.g., extrinsic motivation, identification, awareness and organisational commitment.

The scope of behaviour pertaining to this paper relates to general user behaviour, concentrating on

user interaction with a web browser. Users face a number of threats online, as discussed in Section 2.3

and may have to deal with these threats in both a home-based or organisational environment.

2.2. Affective Feedback

Affective computing is defined as “computing that relates to, arises from, or deliberately influences

emotions” [11]. There are a variety of feedback methods which are considered to be affective. Avatars

can provide affective feedback and have been seen to be beneficial in educational environments [2–4].

Robison et al. [3] used avatars in an intelligent tutoring system to provide support to users, noting that

such agents have to decide whether to intervene when a user is working, to provide affective feedback.

The work highlighted the danger that if an agent intervenes at the wrong time, this may cause a negative

impact on how the user learns using the tool.

Work conducted by Hall et al. [4] also advocated the use of avatars in providing affective feedback

and how they can influence the emotional state of the end-user. The research deployed avatars in

a personal social and health education environment, educating children about bullying. Results showed

the avatars generated an empathetic effect in children, indicating that the same type of feedback could

be used to achieve a similar result in adults.

Textual information with the use of specific words also has the potential to alter a user’s

state/behaviour, e.g., a password may be described as "weak", and this can encourage the user to create a

stronger password [12]. Dehn and Van Mulken conducted an empirical review of ways in which

animated agents could interact with users, and compared avatars against textual information as an

affective feedback method. They considered that whilst textual information could provide more direct

feedback to users, avatars could be used to provide more subtle pieces of information via gestures or eye

contact. Overall, it was noted multimodal interaction could provide users with a greater level of

feedback [13]. Colour is also often utilised, with green or blue used to imply a positive occurrence, with

red indicating a negative outcome [12]. A combination of sounds, colours and dialogues provided a

calming mechanism in a game named “Brainchild” [2] which was designed to help users relax,

highlighting the effectiveness of a multimodal approach.

2.3. Potential Threats

Whilst users are browsing the web, there are a number of security issues they may potentially be

subjected to. Should users download illegal files such as pirated movies

or software, they are not only breaking the law but also engaging in risky security behaviour, placing their system at risk. The files

downloaded may contain viruses or malware [8].

Interaction with websites featuring coding vulnerabilities is also risky and users are generally unaware

of such flaws [14]. If an application is poorly constructed, users may expose themselves to an attack by

simply visiting a site, e.g., vulnerability to XSS attacks or session hijacking. Cross-site scripting (XSS)

attacks are common on the web and may occur where users have to insert data into a website,

e.g., a contact form. Attacks related to social engineering are also linked to technology flaws. Often, users divulge too much information about themselves on social networking sites [1], e.g., it is possible

to extract geolocation data from a specific Twitter account to establish the movements of a user. Such

patterns have the potential to highlight the workplace or home of a user. An attacker could target a user,

gathering the information shared to produce a directed attack against the victim e.g., sending the victim

an email containing a malicious link about a subject they are interested in [9]. Sending a user an email

of this type is known as a phishing attack (a spear phishing attack when it is targeted towards specific

users). The malicious link contained within the email may link to a site asking users to enter information

such as bank account details. As such, many average users would fail to identify a phishing email,

potentially revealing private information [15,16]. The rise in spear phishing attacks has led the FBI to

warn the public regarding this issue [17].

Perhaps one of the most common risky security behaviours involves the misuse of passwords for

online accounts which link to personal information. There can be a trade-off between the level of security

a password provides and its usability [7]. Shorter passwords are less secure; however, they

are easier for users to remember and are therefore more usable. Users may also engage in the practice of

sharing passwords. When Stanton et al. [6] interviewed 1167 end-users in devising a taxonomy of risky

behaviours, it was found that 23% of those interviewed shared their passwords with colleagues.

A further 27.9% of participants wrote their passwords down.

These are just a sample of the attacks users may be subjected to whilst browsing the web on a daily

basis. Security tools such as virus scanners and anti-malware software can aid users if their machines

have been infected with malicious software. If users are educated regarding risky security behaviour,

this may prevent their machines from becoming infected in the first instance. A considerable amount of

research has been conducted into educating and helping users understand risky security behaviour online,

and Section 3 discusses varying approaches.

3. Analysis

This section explores previous research, providing an overview of methods which have been

developed in an attempt to keep users safer online. Solutions created to reduce specific types of attack

will be discussed, highlighting potential issues these tools fail to resolve.

3.1. Keeping Users Safe and Preventing Attacks

Many users participate in risky security behaviour, particularly when it involves passwords, as

highlighted by Stanton et al. [6]. A number of attempts have been made to understand the problems users

face when dealing with passwords, with tools developed to aid users. Furnell et al. [18] conducted a

study in 2006, to gain an insight into how end-users deal with passwords. The survey found that 22% of

participants said they lacked security awareness, with 13% of people admitting they required security

training. Participants also found browser security dialogs confusing and in some cases, misunderstood

the warnings they were provided with. The majority of participants considered themselves as above

average in terms of their understanding of technology, yet many struggled with basic security. As a result

of this confusion in end-users, a number of studies have been conducted in an attempt to improve users'

security awareness in terms of passwords. Bicakci et al. [19] explored the use of graphical passwords built into a browser extension, based

on the notion that humans are better at memorising images than text. The aim of the software developed

was to make passwords more usable, decreasing the likelihood of users engaging in risky security

behaviour. Participants could select five points on an image with a grid overlay to produce a password,

which was compared against previous research conducted with plain images. Results from the study

showed the grid had little effect on the password chosen; however, in a survey of end-users, the grid

proved to be more successful than an image without a grid in terms of usability when rated using a

Likert scale.

To demonstrate the strength of a chosen password, Ur et al. [12] investigated how strength meters

placed next to password fields improved the security and usability of passwords. Participants were asked

to rate their password security perceptions on a Likert scale. Immediately after creating a password with

the aid of a meter, they were surveyed regarding their opinion of the tool. The tool was deemed to be a

useful aid in password creation with participants noting that use of words such as “weak” encouraged

them into creating a stronger password. However, when the study was repeated the following day, between

77% and 89% of participants (depending on the group) were able to recall their passwords, which fails to

sufficiently test the memorability of a password at a much later date. Additionally, 38% of participants

admitted to writing down their password from the previous day, highlighting that despite the

encouragement of the password meter, complex passwords are still difficult to remember.
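A meter of the kind Ur et al. evaluated can be approximated with a few length and character-class heuristics. This is a toy sketch (the rules and thresholds are assumptions; production meters use far richer models, such as trained strength estimators):

```python
import re

# Toy password meter in the spirit of strength-feedback tools: labels a
# password "weak", "fair", or "strong" from simple heuristics. The scoring
# rules here are illustrative assumptions, not any study's actual meter.

def rate_password(pw):
    """Return 'weak', 'fair', or 'strong' from simple heuristics."""
    score = 0
    if len(pw) >= 8:
        score += 1
    if len(pw) >= 12:
        score += 1
    if re.search(r"[a-z]", pw) and re.search(r"[A-Z]", pw):
        score += 1  # mixed case
    if re.search(r"\d", pw):
        score += 1  # contains a digit
    if re.search(r"[^A-Za-z0-9]", pw):
        score += 1  # contains a symbol
    if score <= 2:
        return "weak"
    return "fair" if score <= 3 else "strong"

print(rate_password("password"))        # weak
print(rate_password("Tr0ub4dor&3xyz"))  # strong
```

Labelling a password "weak" is exactly the affective wording the studies above found motivates users to choose stronger ones.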

Much of the research conducted into keeping users safe online, educating them about risky security

behaviour revolves around phishing attacks. Recently, a number of solutions have been developed to

gauge how best to inform users about the dangers of phishing attacks, with the hope that education will

reduce participation in risky security behaviours.

Dhamija and Tygar [20] produced a method to enable users to distinguish between spoofed websites

and genuine sites. A Firefox extension was developed which provided users with a trusted window in

which to enter login details. A remote server generated a unique image which is used to customise the

web page the user is visiting, whilst the browser detects the image and displays it in the trusted window,

e.g., as a background image on the page. Content from the server is authenticated via the use of the

Secure Remote Password protocol. If the images match, the website is genuine; this provides a simple

way for a user to verify the authenticity of the website.

Sheng et al. [21] tried a different approach to reducing risky behaviour, gamifying the subject of

phishing with a tool named Anti-Phishing Phil. The game involves a fish named Phil who has to catch

worms while avoiding the worms on the ends of fishermen's hooks (these are the phishing attempts). The

study compared three approaches to teaching users about phishing: playing the Anti-Phishing Phil game,

reading a tutorial developed or reading existing online information. After playing the game, 41% of

participants viewed the URL of the web page, checking if it was genuine. The game produced some

unwanted results in that participants became overly cautious, producing a number of false-positives

during the experimental phase.

PhishGuru is another training tool designed by Kumaraguru et al. [22] to discourage people from

revealing information in phishing attacks. When a user clicks on a link in a suspicious email, they are

presented with a cartoon message, warning them of the dangers of phishing, and how they can avoid

becoming a victim. The cartoon proved to be effective: participants retained the information after 28 days. The tool didn't cause participants to become overly cautious, and they continued to click on

links in genuine emails; however, a longer study is required.
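The cues this style of training teaches, such as inspecting a URL before clicking, can be expressed as a small heuristic checker. The red flags below are common illustrative indicators, assumed for this sketch rather than drawn from PhishGuru or any other cited tool:

```python
import re
from urllib.parse import urlparse

# Heuristic phishing-URL red flags (illustrative only; real detectors
# combine many more signals with blocklists or machine learning).

def phishing_flags(url):
    """Return a list of suspicious indicators found in a URL."""
    flags = []
    parsed = urlparse(url)
    host = parsed.hostname or ""
    # '@' in the authority can hide the real host after the userinfo part.
    authority = url.split("//", 1)[-1].split("/", 1)[0]
    if "@" in authority:
        flags.append("userinfo '@' trick in authority")
    if re.fullmatch(r"(\d{1,3}\.){3}\d{1,3}", host):
        flags.append("raw IP address instead of domain")
    if host.count(".") >= 4:
        flags.append("unusually deep subdomain nesting")
    if parsed.scheme == "http":
        flags.append("no TLS")
    return flags

# Flags deep subdomain nesting and missing TLS for this lookalike URL.
print(phishing_flags("http://paypal.com.secure.login.example.tk/verify"))
```

Training tools like those above aim to make users perform exactly these checks themselves before clicking a link.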

Information that allows phishing emails to be targeted towards specific users can come from revealing

too much information online. A proposed series of nutrition labels for online privacy has been designed

in an effort to reduce risky behaviour [23]. Since it has been shown that users don't fully understand privacy

policies online, the nutrition labels seek to present the information in a format that is easier for users to

understand. Labels were designed using a simplified grid design with a series of symbols representing

how a site utilises data: how it is collected and used, and whether data is required (opt-in or opt-out).

Results from a small study found that visually, the labels were more interesting to read than a traditional

security policy and presented an easier way for users to find information.

Besmer et al. [24] acknowledged that various applications may place users at risk by revealing

personal information. A tool was developed and tested on Facebook to present a simpler way of

informing the user about who could view their information. A prototype user interface highlighted the

information the site required, optional information, the profile data the user had provided, and the
percentage of the user's friends who could see the information entered. The study showed that those who

were already interested in protecting their information found the interface useful in viewing how

applications handled the data.

In addition to security tools which have been developed to target privacy issues on social networking

sites, studies have also focussed on more general warning tools when the user is browsing the web. A

Firefox extension developed by Maurer [25] attempts to provide alert dialogs when users are entering

sensitive data such as credit card information. The extension seeks to raise security awareness, providing

large JavaScript dialogs to warn users; the study noted that the use of certain colours made the user feel
more secure.

3.2. Issues with Traditional Security Tools and Advice

Some of the tools discussed in Section 3.1 produced unwanted results; in particular, studies found
that users became overly cautious when browsing the web and produced a number of false positive

results when detecting phishing attacks [21]. Another study highlighted that although the tool developed

for submitting private information online performed well in experiments, it was difficult to encourage

users to make use of it. Instead, several participants continued to use web forms, which they were more

familiar with [26].

Many of the tools created focus on one specific area where users are vulnerable, e.g., they educate

people about privacy, passwords or phishing attempts. Despite the number of tools created and designed

to help protect users online, users continue to engage in risky security behaviour, placing their

information and devices at risk. The tools developed span a number of years, indicating that the issue of

risky security behaviour has yet to be resolved. There are a multitude of common threats online, as
highlighted in Section 2.3, and newer tools must therefore focus on more than one potential

threat area.

4. Methodology

The research outlined in this section proposes the use of a browser extension to automatically detect

risky security behaviour, taking a number of different threats into consideration. Future work seeks to

explore the possibility of utilising an affective feedback mechanism in enhancing security risk awareness

on detection of risky behaviour within the browser.

4.1. Proposed System Overview

The research proposed seeks to develop a software prototype, in the form of a Firefox browser

extension, which monitors user behaviour. The prototype will contain feedback agents, several of which

will utilise affective feedback techniques. Should the user engage in potentially risky security behaviour

whilst browsing, e.g., entering a password or credit card number into a form, an affective feedback

mechanism will trigger, warning users regarding the dangers of their actions. Feedback mechanisms

have been explored in previous research and will include colour-based feedback (e.g., green indicating

good behaviour), text-based feedback using specific terms and avatars using subtle cues within the

browser window [27]. Experiments using these agents will investigate (a) if security risk awareness

improves in end-users; and (b) if overall system security improves through the use of affective feedback.

The success of the software will be gauged via a series of end-user experiments followed by a

questionnaire utilising a Likert scale. Figure 1 attempts to summarise how the software prototype

(browser extension) will work. When the user is interacting with a web browser, the tool will monitor

these interactions, and compare them to a knowledge base of known risky behaviours. If a risky

behaviour is detected, an affective feedback agent will be triggered, providing suitable feedback to the

end-user in an attempt to raise awareness of risky behaviour.
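The decision step in this loop can be sketched as a small lookup. This is a minimal sketch only; the behaviour names and feedback types below are illustrative assumptions, not taken from the prototype itself.

```javascript
// Minimal sketch of the monitoring loop described above. The behaviour
// names and feedback types are illustrative placeholders, not the
// prototype's actual knowledge base.
const riskyBehaviours = {
  passwordInForm: "colour", // colour-based cue, e.g. a red border
  creditCardInForm: "text", // text-based warning using specific terms
  maliciousLink: "avatar",  // avatar giving subtle cues in the window
};

// Compare an observed interaction against the knowledge base and
// return the feedback agent to trigger, or null if the behaviour
// is not considered risky.
function selectFeedbackAgent(interaction) {
  return riskyBehaviours[interaction] || null;
}
```

In the full system this lookup would sit between the browser-monitoring code and the affective feedback agents, so that only interactions matching a known risky behaviour trigger a warning.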

Figure 1. Overview of system architecture.

4.2. Technical Details

Following a comparison between XUL-based Firefox extensions (XML User Interface Language)

and those created by Mozilla’s Add-on SDK, a prototype solution was constructed using a XUL-based

extension. This method allows for extensive customisation of the user interface, which a tool of this type
requires, and additional functionality can be gained via links to XPCOM (the cross-platform
component object model) [28]. When developing Firefox extensions to capture user behaviour and
provide feedback to users, a number of files are required. Extensions follow the same basic structure,
with several files that must be included. In terms of modifying an extension to monitor

user behaviour and provide cues to modify the behaviour, particular files are very important.

The browser.xul file within the content folder contains a number of links to other required files and

is essentially the foundation for the whole extension. This file has the ability to link to JavaScript files,

including the jQuery library, should it need to be embedded in an extension. The file also allows

additional XUL constructs to be added, allowing the menus and toolbars within Firefox to be modified,
e.g., by adding a link into a menu to allow the user to run an extension.
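As a rough illustration, a minimal browser.xul overlay of this kind might look as follows. The file names, ids, and chrome:// paths here are hypothetical placeholders (only menu_ToolsPopup is a standard Firefox menu id), so this is a sketch of the structure rather than the extension's actual file.

```xml
<?xml version="1.0"?>
<!-- Hypothetical overlay: ids, paths, and the LinkDetector name are placeholders. -->
<overlay id="linkdetector-overlay"
         xmlns="http://www.mozilla.org/keymaster/gatekeeper/there.is.only.xul">
  <!-- Link the extension's JavaScript file (jQuery could be embedded the same way) -->
  <script type="application/javascript"
          src="chrome://linkdetector/content/linkdetector.js"/>
  <!-- Additional XUL construct: a Tools menu entry so the user can run the extension -->
  <menupopup id="menu_ToolsPopup">
    <menuitem id="linkdetector-run" label="Run Link Detector"
              oncommand="LinkDetector.init();"/>
  </menupopup>
</overlay>
```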

Another file, which can be modified extensively, is the JavaScript file within the content folder.

It can call a number of functions, including referencing the jQuery library and can make use of the

Mozilla framework. The file can manipulate the DOM (document object model) of the website displayed
in the browser, e.g., attaching event listeners to all links on a page or modifying anchor tags. Additionally,
the file can utilise AJAX, passing data back and forth between a web server and the JavaScript file.

To provide a full example of how a Firefox extension may be developed to monitor user behaviour

and provide appropriate feedback, details of the Link Detector extension are outlined (Figure 2). The

Link Detector extension is designed to warn users about malicious links. When the user starts the Firefox

extension, the browser.xul file makes a call to the JavaScript file to run the initial function. The DOM is

then manipulated, using JavaScript to add event listeners to all links on a given website. If a user

approaches a link with the cursor, an event is triggered. JavaScript passes the link value to a PHP script
via AJAX, where it is checked against a list of known malicious links. The list of malicious links is sourced

from a third-party database, which is managed and updated by Malwarebytes, the company with the

anti-malware tool of the same name [29]. The AJAX request then returns a value indicating if the link

is known to be malicious. If the link is flagged as being potentially dangerous, the JavaScript file then

manipulates the DOM, highlighting the malicious link in red. This is repeated for each link a

user approaches.
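The flow just described can be sketched as follows. This is a simplified, assumption-laden reconstruction rather than the extension's actual source: the hard-coded blocklist stands in for the AJAX round-trip to the PHP script and the Malwarebytes-backed database.

```javascript
// Simplified sketch of the Link Detector flow. The blocklist below is a
// hard-coded stand-in for the real AJAX/PHP lookup against the
// third-party malicious-link database.
const knownMalicious = new Set([
  "http://malicious.example.com/login", // illustrative entry only
]);

// In the real extension this check is an asynchronous AJAX request;
// here it is a pure function so the logic can be exercised directly.
function isMalicious(href) {
  return knownMalicious.has(href);
}

// DOM wiring: add a mouseover listener to every link on the page and
// highlight a link in red when it is flagged as malicious.
function attachLinkListeners(doc) {
  for (const anchor of doc.querySelectorAll("a")) {
    anchor.addEventListener("mouseover", () => {
      if (isMalicious(anchor.href)) {
        anchor.style.color = "red"; // affective colour cue
      }
    });
  }
}

// Only run the DOM wiring when a document is actually available.
if (typeof document !== "undefined") {
  attachLinkListeners(document);
}
```

Highlighting via an inline style is only one option; the paper's broader design also considers text-based warnings and avatar cues as feedback agents.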

The Link Detector is a small prototype browser extension, exploring the possibility of raising

awareness regarding risky security behaviours in end-users via the use of affective feedback. As such,

this may aid in preventing users from revealing information to websites that have been hijacked via an XSS
attack. The final prototype developed will not be restricted to scanning for dangerous links only…


Studying the impact of security awareness efforts on user behavior.pdf

STUDYING THE IMPACT OF SECURITY AWARENESS EFFORTS ON USER

BEHAVIOR

A Dissertation Submitted to the Graduate School

of the University of Notre Dame

in Partial Fulfillment of the Requirements

for the Degree of Doctor of Philosophy by

Dirk C. Van Bruggen

Aaron Striegel, Director

Graduate Program in Computer Science and Engineering

Notre Dame, Indiana

March 2014 UMI Number: 3583071 All rights reserved

INFORMATION TO ALL USERS

The quality of this reproduction is dependent upon the quality of the copy submitted.

In the unlikely event that the author did not send a complete manuscript

and there are missing pages, these will be noted. Also, if material had to be removed,

a note will indicate the deletion. UMI 3583071

Published by ProQuest LLC (2014). Copyright of the Dissertation is held by the Author.

Microform Edition © ProQuest LLC.

All rights reserved. This work is protected against

unauthorized copying under Title 17, United States Code ProQuest LLC.

789 East Eisenhower Parkway

P.O. Box 1346

Ann Arbor, MI 48106 – 1346

© Copyright by Dirk Van Bruggen

2014

All Rights Reserved

STUDYING THE IMPACT OF SECURITY AWARENESS EFFORTS ON USER
BEHAVIOR

Abstract

by

Dirk C. Van Bruggen

Security has long been a technical problem with technical solutions. Over time,

it has become apparent that human behavior is a major weakness in technical solutions. Extensive efforts have been made to inform individuals about threats and the
safeguards with which to protect against them. Organizations have developed
awareness campaigns to enhance the security behaviors of employees. These awareness campaigns seek to provide employees with information about a threat as well as
measures to take against it.

This dissertation investigates the effectiveness of various security awareness message themes as well as the individual perceptions and characteristics that affect security behavior. First, a survey study is conducted which measures perceptions surrounding security threats and safeguards. The analysis of the survey data builds a

foundational understanding of how individuals assess and respond to technical security threats. Next, five awareness themes are evaluated through the use of targeted

interventions in which non-complying individuals are presented with awareness messages. The individual responses to interventions and surveys allow personality data

to inform both initial security safeguard behavior as well as response behavior to targeted awareness messages. Overall, the tested awareness methods were found to be

somewhat effective. However, with the addition of individual information, analysis
identified correlations with individual response. These correlations point to the importance of considering individual motivations and perceptions surrounding security
threats and safeguards.

Dedication

To Nichole, thank you for being a wonderful wife and friend. To my mother, thank

you for always encouraging me to ask questions and search for answers.

CONTENTS

FIGURES
TABLES
CHAPTER 1: INTRODUCTION
  1.1 The Need for Security
  1.2 Raising Awareness
  1.3 Methods
  1.4 Contributions
  1.5 Summary
CHAPTER 2: BACKGROUND
  2.1 Technical Security
    2.1.1 Mobile Devices
    2.1.2 Limitations of Technical Security
  2.2 Human Security
    2.2.1 Behavioral Models
    2.2.2 Personality Traits
  2.3 Usable Security
  2.4 Awareness Messages: Current Techniques Used by Organizations
  2.5 Usable Security
  2.6 Summary
CHAPTER 3: RISK PERCEPTION AND BEHAVIOR
  3.1 Introduction
  3.2 Contributions
  3.3 Threat and Safeguard Scenarios
  3.4 Studied Perceptions
  3.5 Survey Setup
  3.6 Results
    3.6.1 Psychometric Factors
    3.6.2 Avoidance Factors
    3.6.3 Risk Propensity
    3.6.4 Age
    3.6.5 Risk Perception and Behavior
    3.6.6 Technology Familiarity and Risk Perception
  3.7 Summary
CHAPTER 4: PHONE LOCKING
  4.1 Introduction
  4.2 Study Population
  4.3 Android Screen Locks
  4.4 Data Collection Framework
  4.5 Initial Observations
  4.6 Survey Study
  4.7 Targeted Interventions
  4.8 Targeted Intervention Results
    4.8.1 Demographics
    4.8.2 Regressed Behavior
    4.8.3 Prior Behavior
    4.8.4 Usage Data
    4.8.5 Personality Differences
    4.8.6 Social Tie Relationships
    4.8.7 Change Over Time
    4.8.8 Discussion
  4.9 Risk Perceptions
  4.10 Summary
CHAPTER 5: MOBILE ANTIVIRUS
  5.1 Introduction
  5.2 Data Collection Framework
  5.3 Initial Behaviors
  5.4 Survey Responses
  5.5 Targeted Interventions
    5.5.1 Message Themes
    5.5.2 Modes of Communication
  5.6 Results
    5.6.1 Relapsed Behavior
    5.6.2 Usage and Observed Change in Behavior
    5.6.3 Demographic Comparisons
    5.6.4 Peers and Change
    5.6.5 Personality and Change
  5.7 Risk Perceptions
  5.8 Summary
CHAPTER 6: CONCLUSIONS AND FUTURE WORK
  6.1 Conclusions
  6.2 Contributions
  6.3 Business Takeaways
  6.4 Future Research Directions
APPENDIX A: PERCEPTION APPENDIX
  A.1 Demographics
  A.2 Behavior Specific Questions
  A.3 Familiarity and Knowledge
  A.4 Risk Propensity Scale
APPENDIX B: SCREEN LOCKING APPENDIX
  B.1 Intervention Messages
BIBLIOGRAPHY

FIGURES

2.1 Theory of Planned Behavior [1]
2.2 Protection Motivation Theory [2]
2.3 Technology Threat Avoidance Model [3]
2.4 The Anti-Smoking Campaign is an Example of a Medical Campaign to Raise Awareness of the Dangers of Smoking
2.5 An Example of a Campaign to Raise Awareness of the Environmental Effects of the Reuse of Towels in a Hotel Room from Research by Goldstein, Cialdini, and Griskevicius
2.6 The "Loose Lips May Sink Ships" Campaign is an Example of a Military Campaign to Protect the Safety of Military Operations [4]
2.7 Poster Warning Against Phishing Attacks [5]
2.8 First Place Winner of 2013 EduCause Information Security Poster Contest [6]
2.9 An Example of an Anti-File Sharing Campaign from the University of Notre Dame [7]
3.1 Survey Duration
3.2 Participant Age
3.3 Participant Education
3.4 Participant Employment
3.5 Knowledge and Threat Comparison
3.6 Impact and Threat Comparison
3.7 Perceived Severity and Threat Comparison
3.8 Controllability and Threat Comparison
3.9 Possibility and Threat Comparison
3.10 Awareness and Threat Comparison
3.11 Perceived Susceptibility and Threat Comparison
3.12 Self-efficacy and Threat Comparison
3.13 Perceived Safeguard Effectiveness and Threat Comparison
3.14 Perceived Safeguard Cost and Threat Comparison
3.15 Perceived Threat and Threat Comparison
3.16 Radar Chart Comparing All Factors
3.17 Risk Propensity Scale Comparison
3.18 Security Behavior and Age Comparison
3.19 Technology Usage
4.1 Android Screen Locks
4.2 Percent of Gender Using Each Screen Lock
4.3 Screen Lock vs. Previous Type of Phone
4.4 Screen Lock Choice Categorized by SMS Usage
4.5 Screen Lock Choice Categorized by Rx (Downstream) Traffic Usage
4.6 Click Throughs vs. Time
4.7 Overall Change Categorized by Intervention Group and Maintained vs. Regressed Behavior Over Intervention Study and 7 Month Follow Up
4.8 Overall Change as Categorized by Prior Security Behavior
4.9 Average Personality Scores as Categorized by Response to Deterrence-Based Intervention
4.10 Average Personality Scores as Categorized by Response to Intervention (All Message Groups Included)
4.11 Cumulative Security Changes Over Time
4.12 Frequency of Change Over Time Categorized by Intervention Group
5.1 Postcards Used for Antivirus Intervention
5.2 E-mail Used for Antivirus Intervention
5.3 Overall Change Categorized by Intervention Group
5.4 Cumulative Security Changes Over Time
5.5 Percent of Participants Who Opened the E-Mail and Clicked on the Link
5.6 Percent of Participants Who Opened the E-Mail or Clicked on the Link and Then Changed Behavior
5.7 Count of Installed and Removed Antivirus Behavior Over Time
5.8 RX Traffic vs. Change
5.9 TX Traffic vs. Change
5.10 Average Daily SMS vs. Change
5.11 Screen Time vs. Change
5.12 Average Weekly Phone Call Time vs. Change
5.13 Major vs. Change

TABLES

2.1 BEHAVIOR MODELS
2.2 TYPES OF SOCIAL NORMS
2.3 BIG FIVE PERSONALITY TRAITS
2.4 DARK TRIAD PERSONALITY TRAITS
2.5 TYPES OF SOCIAL NORMS
3.1 THREAT AND SAFEGUARD SCENARIOS USED IN THE SURVEY STUDIES
3.2 PSYCHOMETRIC RISK PERCEPTION FACTORS
3.3 TECHNOLOGY THREAT AVOIDANCE THEORY FACTORS
3.4 KNOWLEDGE PAIRWISE COMPARISONS USING PAIRED T-TESTS
3.5 IMPACT PAIRWISE COMPARISONS USING PAIRED T-TESTS
3.6 SEVERITY PAIRWISE COMPARISONS USING PAIRED T-TESTS
3.7 CONTROLLABILITY PAIRWISE COMPARISONS USING PAIRED T-TESTS
3.8 POSSIBILITY PAIRWISE COMPARISONS USING PAIRED T-TESTS
3.9 AWARENESS PAIRWISE COMPARISONS USING PAIRED T-TESTS
3.10 PERCEIVED SUSCEPTIBILITY PAIRWISE COMPARISONS USING PAIRED T-TESTS
3.11 SELF-EFFICACY SUSCEPTIBILITY PAIRWISE COMPARISONS USING PAIRED T-TESTS
3.12 PERCEIVED SAFEGUARD EFFECTIVENESS SUSCEPTIBILITY PAIRWISE COMPARISONS USING PAIRED T-TESTS
3.13 PERCEIVED SAFEGUARD COST SUSCEPTIBILITY PAIRWISE COMPARISONS USING PAIRED T-TESTS
3.14 PERCEIVED THREAT SUSCEPTIBILITY PAIRWISE COMPARISONS USING PAIRED T-TESTS
3.15 BEHAVIORS RELATED TO SECURITY
4.1 DISTRIBUTION OF INTENDED COLLEGE MAJOR
4.2 AVERAGE USAGE PER WEEK
4.3 BASELINE SCREEN LOCK USAGE DURING WEEK 2
4.4 AVERAGE USAGE PER WEEK CATEGORIZED BY SCREEN LOCK TYPE
4.5 SOCIAL PEERS VS. INITIAL LOCKING BEHAVIOR
4.6 AWARENESS SURVEY RESPONSES
4.7 PASSWORD SHARING SURVEY RESPONSES
4.8 SELF REPORTED VS. COLLECTED USAGE OF SCREEN LOCKS
4.9 SUMMARY OF USAGE OVER TIME
4.10 SUMMARY OF OVERALL CHANGE OBSERVED
4.11 GENDER VS. BEHAVIOR CHANGE AS OBSERVED DURING INTERVENTION STUDY AND 7 MONTH FOLLOW UP
4.12 PEERS VS. INTERVENTION RESPONSE
4.13 LONGITUDINAL DATA FOR INTERVENTION PARTICIPANTS
5.1 MOBILE ANTIVIRUS PROGRAMS FOUND INSTALLED ON PARTICIPANT PHONES
5.2 BASELINE ANTIVIRUS USAGE: JANUARY 2013
5.3 ANTIVIRUS USAGE VS. GENDER
5.4 ANTIVIRUS VS. PREVIOUS TYPE OF PHONE
5.5 AVERAGE USAGE PER WEEK CATEGORIZED BY USAGE OF ANTIVIRUS
5.6 INDIVIDUAL INITIAL ANTIVIRUS BEHAVIOR VS. PEER BEHAVIOR BASED ON PROXIMITY AND SMS
5.7 SURVEY RESPONSES
5.8 SUMMARY OF USAGE OVER TIME FROM FEBRUARY TO MAY 2013
5.9 AVERAGE USAGE PER WEEK CATEGORIZED BY INTERVENTION RESPONSE
5.10 GENDER VS. BEHAVIOR CHANGE AS OBSERVED DURING INTERVENTION STUDY AND 8 MONTH FOLLOW UP
5.11 INDIVIDUAL CHANGE IN BEHAVIOR VS. PEER CHANGE BEHAVIOR BASED ON PROXIMITY AND SMS

CHAPTER 1
INTRODUCTION

Security is a growing concern associated with the exponential growth in technology used to connect people and systems across the globe. Many technical security

solutions are developed to address vulnerabilities in computer systems; however, such

solutions often fall short in preventing all attacks on a system. Many times, the

weakness is due to the fact that humans must interact with these systems. Users of

a system may not fully comprehend the complexities and vulnerabilities associated

with it, resulting in human error that endangers the security of the entire
system. Awareness campaigns are oftentimes employed to raise awareness among
users in order to fortify the weak human link. While awareness campaigns are readily being adopted, little is known about the effectiveness of these security awareness

campaigns. This dissertation sets out to explore how effective existing techniques are

at changing user behavior and also what factors may play a role in user decisions

related to awareness messages.

1.1 The Need for Security

Over the past two decades, society has had a growing dependence on technology

which has transformed the globe. People are undergoing a degree of change not

seen since the industrial revolution. Everyone is interconnected in real-time and has

access to numerous channels of information. Additionally, people produce and share

information in many new ways. Hospitals are moving towards using electronic health

records. Utilities are connecting plants to the grid. The advent of internet connected

appliances is bringing ever-expanding types of data and services onto the internet for

people to access. Increasingly companies are moving to e-commerce to supplement

or replace brick and mortar stores.

As the speed at which technology changes increases, so too does the amount of

sensitive information stored within the systems. Less than six years ago, Google

Street View [8] was released, which allowed anyone with an internet connection to

virtually visit the majority of streets within the U.S. Two decades ago, individuals did

not need to worry about the ability of a stranger to view their house from the internet.

Street View is an example of technology growing faster than policies can keep up with it.
Alongside the increased use of technology is an increase in attacks. Companies

online and offline are losing credit card information [9]. For example, in 2013 Target

lost 40 million records of customer information including phone numbers, credit card

numbers, and other sensitive information. Additionally, websites saw an increase in

denial of service attacks, with the first two months of 2014 witnessing the largest

denial of service attack ever [10] in which attackers were able to direct 200-400Gbps

of attack traffic towards victims. The need for securing digital systems is greater now

than ever before.

Not only are people using more traditional computing devices (e.g., desktops or
laptops) to interact with the digital world, but they have also moved to carrying mobile devices
everywhere with them. In 2013, over half a billion new mobile devices were added

to the globe [11]. The number of mobile connected devices will exceed the world’s

population by 2014 [11]. The switch to using mobile devices is resulting in an increase

in the amount of sensitive data contained on the devices and an enlarged attack

surface. Mobile devices collect and share information about how users interact with

both the digital world (e.g. browsing history) as well as the physical world (e.g. gps

location, camera, microphone). Additionally, the plethora of information available on

mobile devices is collected and shared with service providers, application developers, and third-party advertising companies.

From an organizational perspective, the increased risk is two-fold. First, with

many users personally owning a variety of capable mobile devices, considerable pressure emerges from employees to have their organizations embrace BYOD (Bring Your

Own Device) policies. Second, the perceived potential for productivity gains offered

by capable mobile devices is appealing to the organization but tempered by the risks

of exposing sensitive data. According to [12], 73% of companies now have a mix of

company and employee owned mobile devices. However, only 48% had implemented

security measures to protect mobile devices and 21% had no plans to implement such

measures in the future.

Although specific case studies involving BYOD have demonstrated cost savings

approaching nearly half of monthly service costs [13], an article in Technology Review

cast significant doubts on the overall savings of BYOD [14]. According to the article,

companies such as IBM are seeing potential savings in service costs from BYOD entirely

eroded if not surpassed by related support costs. Central to those support costs is the

issue of risk mitigation, namely, how can an organization ensure that various mobile

apps or actions by the mobile employee are not exposing sensitive information? With

a company-owned device, such policies can be strictly enforced [15]. Unfortunately,

the diverse array of smart mobile devices and the resulting interplay arising from

employee roles and privileges makes enforcement on BYOD decidedly non-trivial

[16, 17].

1.2 Raising Awareness

Many people have identified the need for raising awareness of security threats

among workforce populations. In fact, the SANS Institute has put together the

“Securing The Human” set of resources [18] which claims to provide resources to

develop an “engaging, high-impact security awareness program”. Such programs are

designed to help companies build a culture of security within their organizations. Such

methods include computer-based training programs, posters, e-mails, etc., in

order to help users identify a threat and know how to respond appropriately. However,

such awareness campaigns are difficult to evaluate. Many organizations may ask if

people saw certain messages posted in different areas of the company. This process

would help identify exposure to a message, but not necessarily the effectiveness of the

message itself. Additionally, organizations may compare levels of attacks both before

and after deployment of awareness campaigns. Such a comparison depends on many

complex factors and does not offer much insight into the effectiveness of security

awareness methods. Finally, human behavior is complex with many theories within

psychology literature describing how and why individuals behave in the ways they

do. This thesis aims to draw on the findings from psychology literature to improve

upon existing awareness techniques.
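The before-and-after comparison mentioned above can be made concrete. As a purely illustrative sketch (the incident counts and the per-user-per-month normalization below are assumptions for the example, not data from any study cited here), one might compare incident rates around a campaign rollout:

```python
def incident_rate(incidents, users, months):
    """Security incidents per user per month."""
    return incidents / (users * months)

def campaign_effect(before, after):
    """Relative change in the incident rate after an awareness campaign.
    Negative means the rate dropped; as noted above, this alone cannot
    separate the campaign's effect from other complex factors."""
    return (after - before) / before

# Hypothetical numbers: 120 incidents over 6 months pre-campaign,
# 80 incidents over 6 months post-campaign, 1,000 users throughout.
before = incident_rate(120, 1000, 6)
after = incident_rate(80, 1000, 6)
print(round(campaign_effect(before, after), 3))  # -0.333: a one-third drop
```

Even a large drop like this remains only suggestive, which is exactly the limitation the paragraph above raises.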

One concern with the rapidly growing adop…


User Behaviors in workplace on email secrecy.pdf

10th IEEE International Conference on Collaborative Computing: Networking, Applications and Worksharing (CollaborateCom 2014)

Role-Playing Game for Studying User Behaviors in Security: A Case Study on Email Secrecy

Kui Xu, Danfeng (Daphne) Yao, Manuel A. Pérez-Quiñones, Casey Link
Department of Computer Science, Virginia Tech
Email: {xmenxk, danfeng, perez}@cs.vt.edu, ctlink@vt.edu

E. Scott Geller
Department of Psychology, Center for Applied Behavior Systems, Virginia Tech
Email: esgeller@vt.edu

978-1-63190-043-3 © 2014 ICST. DOI: 10.4108/icst.collaboratecom.2014.257242. This work has been supported in part by NSF grant CAREER CNS-0953638, ONR grant N00014-13-1-0016, ARO YIP W911NF-14-1-0535, ICTAS, and the HUME Center of Virginia Tech.

Abstract—Understanding the capabilities of adversaries (e.g., how much the adversary knows about a target) is important for building strong security defenses. Computing an adversary's knowledge about a target requires new modeling techniques and experimental methods. Our work describes a quantitative analysis technique for modeling an adversary's knowledge about private information at the workplace. Our technical enabler is a new emulation environment for conducting user experiments on attack behaviors. We develop a role-playing cyber game for our evaluation, where the participants take on the adversary role to launch ID theft attacks by answering challenge questions about a target. We measure an adversary's knowledge based on how well he or she answers the authentication questions about a target. We present our empirical modeling results based on the data collected from a total of 36 users.

I. INTRODUCTION

The ability to realistically model how much the attackers know about a target is useful. It helps predict privacy and security threats from known or unknown adversaries, which in turn facilitates the protection of confidential information. Specifically, it is desirable for one, say T, to analyze how much others, including T's friends, know about T's personal data, i.e., T asks "How much do others know about me?". To describe this problem more formally, given the target T, an adversary A, the history of interactions between T and A, and a sensitive piece of information d ∈ P about T from a finite space P, we define guessability as the likelihood of adversary A knowing d about the target T. Solving this problem can help one model and assess security and privacy threats.

This issue, referred to by us as the adversary's knowledge problem, has not been addressed in the literature. There are studies on new knowledge that an adversary may gain about a target by inferring from publicly available data [1] or from online social networks [2]. In data publishing privacy, a substantial amount of research has been on modeling and sanitizing data according to a varying degree of adversaries' knowledge [3], [4], [5], [6], [7], [8]. However, these solutions are not designed to address the guessability problem.

In our work, we measure an adversary's knowledge by how well he or she answers the authentication questions about a target. We quantitatively analyze factors that affect the adversary's knowledge with respect to a sensitive attribute. These factors include i) properties of the interaction and relation between the adversary and the target, directly or indirectly via third parties, ii) properties of the sensitive attribute, and iii) any publicly available information regarding the target. Our experimental evaluation is performed in the context of a question-based authentication system, where we evaluate one's ability to answer the challenge questions of others.

There are many types of adversaries. An adversary may be a stranger, an acquaintance, a colleague, a relative, or a close friend of a target. The adversary may be a hardened career criminal, a novice hacker, a disgruntled employee, or a cyber spy. The privacy threat and analysis may be customized under different adversary models. Without loss of generality, we present our design, model, and evaluation under a university environment. Our work analyzes the privacy threat posed by known acquaintances of a target. Our methodology applies to the analysis of other adversary models.

For our experiments, we develop a new role-playing game system that is a technical enabler for realizing our goals. The game system automatically generates challenge questions from a target's private activities. Players of the game system are asked to impersonate the target by answering the questions related to the target. This role-playing game provides a testbed for studying attack behaviors in cyberspace.

In our user study, we collected 1,536 user responses and associated 3,072 behavior data points from experiments. Our results reveal a 41.4% average success rate when a player is asked to answer the multiple-choice privacy questions of a target in a university setting. We found that the duration of relation and communication frequency between the target and the player are strong predictors.

The private information in our game system is based on a target's email messages. Email messages are usually accessible only by the owner, and thus it is reasonable to consider them as private between the sender and the receiver. We automatically generate challenge questions based on email contacts, subjects, or contents. Our experiments measure how well others know about the email activities of a target. All email messages contributed by participants are properly sanitized by their owners to remove possible sensitive information.

Our analysis is based on the data from 36 participants in our experiment, which might affect the accuracy of experimental findings. Conducting user studies or experiments involving private and sensitive information has always been challenging. Despite the relatively small sample size, our work is the first step towards addressing the important problem of quantitative modeling of an adversary's knowledge, and our methodology based on the role-playing game is new.

II. RELATED WORK

Existing research on understanding offensive behaviors in cyberspace is typically conducted through surveys, for example, on cyber-bullying [9] and on the likelihood of self-reporting crimes [10]. Scam victims' behaviors were analyzed in [11], where the scams studied are mostly from the physical world. In comparison, we design a role-playing attack game for analyzing cyber-security behaviors.

Currently, security-related games are mainly designed for education purposes, including one based on the popular multiplayer online game Second Life [12]. We use game systems to conduct research relevant to cyber security. Our systems can also be used to educate users about important cyber-security concepts.

The security of authentication questions is also experimentally measured in the work described in [13]. Although with different goals, as a comparison, the experiment in [13] revealed that acquaintances with whom participants reported being unwilling to share their webmail passwords were able to guess 17% of their answers, and those who were trusted by their partners were able to guess their partners' answers 28% of the time. These numbers are lower than what we get using questions in the form of multiple-choice questions.

The increasing use of online social networks also causes privacy issues, and sensitive information is usually either publicly provided or uploaded by other people or friends [14], [15]. The authors in [1] showed that, with a small piece of seed information, attackers can search a local database or query a web search engine to launch re-identification attacks and cross-database aggregation. Their simulated result shows that large portions of users with an online presence are very identifiable. The work in [16] used a leakage measurement to quantify the information available online about a given user. By crawling and aggregating data from popular social networks, the analysis showed a high percentage of privacy leakage from online social footprints, and discussed the susceptibility to attacks on physical identification and password recovery. Using social networks as a side-channel, the authors in [17] are able to deanonymize location traces. The contact graph identifying meetings between anonymized users can be structurally correlated with a social network graph, thereby identifying 80% of anonymized users precisely. In comparison, our work studies the privacy leak within an organization.

In personal information management, the work in [18] used a memory questionnaire to study what people remember about their email. They found that the most salient attributes were the topic of the message and the reason for the email. People demonstrated good abilities to refind their messages in email. In the majority of tasks, they remembered multiple attributes. These findings help support our approach of using email (or other personal information) as a source of information for generating authentication questions.

Shannon's entropy [19], [20], [21] has been widely used in many disciplines, such as sensor networks [22], cryptography [23], and preference-based authentication [24]. Our quantifying activity fundamentally differs from the analysis by Jakobsson, Yang, and Wetzel on quantifying preferences [24] because of the diversity and dynamic nature of personal activities in our model. Unlike [24], email-based challenges do not require users to pre-select questions and set up answers.

Our work is different from the existing work [25] that uses entropy for quantifying knowledge-based authentication, in terms of goals and approaches. For example, Chen and Liginlal proposed a Bayesian network model for aggregating a user's responses to multiple authentication challenges to infer the final authentication decision [25]. They also described a method for strategically selecting features (or attributes) for authentication with entropy [26]. Both pieces of work were validated with simulated data. Our work aims to predict guessability with respect to an attacker's prior knowledge. We perform experimental validation with real-world data.

There have been continuous research advances in the field of authentication systems and their usability [27]. Our work is not to propose a new authentication method; rather, we develop a general methodology for modeling an adversary's knowledge. Authentication is used as an experimental evaluation tool to demonstrate our approach. There exist many research solutions on new authentication systems and their security evaluation (e.g., [28], [29], [30], [31], [32]). A conventional question-based authentication is usually used as a secondary authentication mechanism on a web site, when the user tries to reset a forgotten password. We adopt the email-based challenges proposed in [33], which conveniently allow us to perform accurate and specialized data collection, categorization, and quantitative measures on the data and attributes.

Similar to our work, where email activities are used to generate challenge questions and evaluate adversary knowledge, applying user activities for security purposes has been researched in previous work [34], [35], [36]. User behaviors have been used for detecting illegal file downloads [34], discovering abnormal network traffic [35], and identifying malicious mobile apps [36].

III. SYSTEM DESIGN

We design a role-playing game system to provide a controlled and monitored environment for the players to perform impersonation attacks against targets. We describe our design and implementation of the game system in this section. Using this system, our user study in Section V measures the guessability of personal and work email records of targets by known or unknown individuals. These individuals play the role of adversaries in the emulated ID theft scenarios in the user study.

A. Overview

We define a target T as the individual whose identity is being attacked, that is, a player whose challenge questions are guessed by adversaries A. A player aims to impersonate the target through answering or guessing the challenges. The player may know the target or may be a complete stranger to the target. The player is referred to as the adversary.
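The paper measures an adversary's knowledge by how well the player answers a target's challenge questions (the abstract reports a 41.4% average success rate). As a minimal illustrative sketch, not the authors' code, guessability can be estimated per (player, target) pair from answer records and compared against random guessing; the tuple record format here is an assumption for the example:

```python
from collections import defaultdict

RANDOM_BASELINE = 1 / 5  # 5-choice multiple-choice questions: chance performance

def guessability(responses):
    """Estimate guessability as the fraction of challenge questions each
    (player, target) pair answered correctly.

    `responses` is an iterable of (player, target, correct) tuples, where
    `correct` is a bool. This record format is a hypothetical schema for
    illustration, not the paper's actual data layout.
    """
    totals = defaultdict(int)
    hits = defaultdict(int)
    for player, target, correct in responses:
        totals[(player, target)] += 1
        hits[(player, target)] += bool(correct)
    return {pair: hits[pair] / totals[pair] for pair in totals}

# Toy example: player p1 answers 4 questions about one target.
demo = [("p1", "ProfA", True), ("p1", "ProfA", False),
        ("p1", "ProfA", True), ("p1", "ProfA", False)]
scores = guessability(demo)
print(scores[("p1", "ProfA")])                     # 0.5
print(scores[("p1", "ProfA")] > RANDOM_BASELINE)   # True: better than chance
```

A score well above the 20% baseline indicates the player holds real knowledge about the target rather than guessing.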

Our evaluation can utilize any question-based authentication system. Conventional authentication questions are usually based on historic personal data and events (e.g., names of hometown and school). However, we choose not to use these conventional challenges for two reasons: privacy and scalability. First, these types of sensitive data are used in the real world for secondary authentication; revealing them during experimental evaluation compromises the privacy of participants. Second, collecting personal data of participants requires manual effort, which is not scalable.

Our challenge questions are generated from email messages of targets. Using emails as the data source of private information offers several advantages.

1) Email activities are dynamic and change with time, which fundamentally differs from personal facts such as mother's maiden name. Email allows us to evaluate the impact of dynamic private data on adversaries' knowledge.

2) From a system designer's perspective, an email system allows us to completely automate the operations of data retrieval, attribute extraction, challenge-question generation, and verification of user responses. We write client-side scripts utilizing email server APIs for these tasks. Email servers and email messages share common communication protocols, APIs, and data formats, which adds to the compatibility and scalability.

3) One-to-one email communication is private and suitable for our privacy evaluation. It provides a rich context and semantics for personal information. The information is not used by online commercial systems for real-world authentication.

Our design minimizes the interaction between the game server and the mail servers. We perform a one-time data-transfer operation to fetch mail records of targets with proper permission and data sanitization. The corpus data is stored and analyzed by us securely for generating challenges and verifying answers. There is no subsequent interaction with the mail server. In this one-time data-transfer operation, we collect mail records, including Inbox, Sent, and local folders. Only during this data transfer is the participating target required to enter his or her password to access the mail records on the mail server. We use JavaMail for fetching and parsing email messages. Parsing the emails allows us to extract information such as sender/receiver, email title, timestamp, and also email message data. The class IMAPSSLStore is used, which provides access to an IMAP message store over SSL. (The game server is different from the email server.)

B. Challenge Questions

We automatically generate four types of challenge questions asking about various attributes of a target's email messages. Examples are shown below.

• FromWhom: From whom did Professor A receive the email with subject 'Agenda for Dr. X's visit.' on 2011-03-16T14:59?

• SentWhom: To whom did Professor B send the email on 2011-08-18T21:21 with subject 'Re: GraceHopper 2011'?

• FromSubject: What is the subject of the email to Professor C from Y on 2011-06-17T13:23?

• SentSubject: What is the subject of the email Professor D sent to Z on Wed, Oct 5, 2011 at 5:10 PM?

A challenge question is asked in the form of multiple choice with 5 choices. Questions have wrong answers among the choices. Wrong choices are automatically generated from random email messages of the target. A question may contain a "None of the above" choice with a pre-defined probability.

C. Overview of Game Procedure

A player logs in to our server with a password through a secure HTTPS connection. Our game server hosts the challenge questions. (Our implementation is based on the Restlet Java web server.)

The player selects targets to attack and answers a total of 48 challenge questions. The questions associated with the selected target are retrieved from our backend MySQL database and shown to the player in a browser. All the questions are in the form of multiple-choice questions.

During the game, the player is allowed to use the Internet. Upon submission, the player's answers are stored by the server. The server compares the submitted answers with the correct answers stored in the database, and computes the player's performance.

The game system has the following components: i) email retrieval for retrieving email messages of targets, ii) question generation for parsing email messages and generating multiple-choice questions, iii) user interface, iv) web hosting for online participation, and v) database storage for storing users' responses. Our game rules allow adversaries to search the Internet for clues and hints. Using email activities for challenge questions is desirable because of their rich context and archival nature. Our design generates email-based questions by leveraging the existing stored data of a user on the mail server.

IV. SOURCES OF ADVERSARY'S KNOWLEDGE

We categorize the factors that contribute to the leak of private information (e.g., entropy of the corresponding random variables, social relation, and interaction). We then design quantitative measurements for each of these factors, and compute their significance in predicting an adversary's knowledge.

Public information available from the Internet and public records is a common source for gaining knowledge about a target. How much knowledge about a target can be gained merely from the publicly available information on the Internet was analyzed by Yang et al. in [1]. That study is particularly suitable for analyzing the background knowledge of stranger adversaries. In contrast, our work is focused on two other factors contributing to the guessability analysis, namely data regularity and interaction, which are described next. These factors may not be independent of each other.

• Data regularity: the regularity or predictability of the target's activities, profiles, or persona. This factor is determined by the characteristics of the target and the attribute being challenged, and is related to the difficulty of the challenge question. We define an activity or event to have one or more attributes describing properties of the activity. We view an attribute as a random variable that may take a number of possible outcomes. An activity may be Alice sending an email message, and its attributes may include the sender, receiver, timestamp, subject of the email, and attachment of the email. A regular event or activity (e.g., the dinner location is usually at one's home) is easier to guess than a frequently changing event (e.g., the last person to whom you sent an email). We use entropy to summarize the regularity of events in our evaluation.

• Direct or indirect relation and interaction: the interaction and relation between the parties and their personal or workplace social network. This factor aims at capturing the dynamics between the parties in order to analyze the flow of private information. For a stranger adversary, this factor may provide no information in the analysis due to the lack of available data. The target and the adversary may have direct or indirect social connections, so their relation and communication are important factors that can be used to estimate the knowledge of an adversary about the target. If the adversary is from the target's personal or professional social networks (e.g., relatives, colleagues, friends), the adversary has background knowledge about the target, which makes guessing easier. The relation and interaction may be direct or indirect through third parties. We hypothesize that close individuals, or two individuals with overlapping social networks, have a high degree of background knowledge about each other. This interaction factor may be further categorized into two classes: i) static social relation and ii) dynamic interaction. The former refers to relations such as advisor-advisee, instructor-student, parent-child, friend, or colleague. For each relationship, the dynamic interaction (e.g., duration of relation, communication patterns) between the involved parties provides more fine-grained information and description for our analysis. Completely gathering these social interactions is challenging, if not impossible; e.g., water-cooler conversations are difficult to systematically record and analyze. For our experimental demonstration, we choose to analyze email records because of their archival nature.

• Collusion among adversaries: the case in which multiple adversaries collaborate in figuring out the same target's private information. The sharing of knowledge has a big impact on the total amount of information adversaries can obtain by teaming up with each other. Different people know the target from different aspects, and by putting knowledge together, adversaries gain a more complete understanding of the target, both direct and indirect.

There are various methods for quantifying these factors and integrating them to assess the adversary's knowledge. We perform regression analysis based on our quantified factor values. The resulting model can be used to assess the knowledge of either a specific individual or types of individuals.

Our results in Section V found that the duration of relation and frequency of communication are strong predictors of an adversary's guessability in our model. These factors may be integrated with the public-information factor during the analysis. The accuracy of modeling may highly depend on the completeness and accuracy of the information used in the analysis.

V. EXPERIMENTAL EVALUATION

All our experiments involving human subjects have been conducted under proper IRB approvals and are compliant with IRB regulations. We took extra caution to protect data security. There are two roles in our experiment: the target, from whose email messages questions are generated, and the player (i.e., attacker), who guesses the questions about the target. The player is allowed to use the Internet. Targets are all professors in a university. They contributed their sanitized email content through an automatic procedure. We assume that email messages are private between the sender and receiver, and contain personal and work-related information.

A. Experimental Setup

We generate 24 challenge questions from each target's email records. The questions are sanitized by the target. 12 questions are based on the sender or receiver (referred to as SentWhom and FromWhom). 12 questions are based on email subjects (referred to as SentSubject and FromSubject). We only process email headers, and the content of email messages is not kept or used.

An email header can be considered the abstract of an email message and contains different kinds of private information, which is not limited to the form of emails. It also allows easy and automatic information processing for experimental question generation. Richer information can be extracted from email contents, with advanced natural language processing and stricter sanitization. Our experimental approach can be generalized to use other sources of personal information as well.

We consider a stronger adversary model than complete strangers acting as attackers (e.g., as in the analysis done in [1]). The attackers could be acquaintances of their targets. To simulate such a situation, we recruited students of the targets as players, including undergraduate and graduate students within the same university. Some of the students may or may not have worked with the targets, so the adversaries (players) in our model may have more access to their target for gaining knowledge about the challenge questions.

It is possible that the adversary may be partly involved in some email messages with the target. However, the chance is low considering the total number of email messages each target has. Some targets provided the email messages in their Inbox or Sent folder for the experiment, while others chose to provide the email messages in a few organized folders, so the timespan of the messages collected from each target varies, from months to years.

We give players performance-based incentive cash rewards, i.e., the amount of their reward depends on the number of correct answers. Each player answers questions about two targets (48 total). We also collect and analyze behavioral data. The behavior data includes i) the duration of knowing the target and ii) the player's confidence about his or her answer. Table I summarizes the experimental setup.

TABLE I. EXPERIMENTAL SETUP
  Targets             4
  Auth. questions     1,536
  Behavior questions  3,072

TABLE II. TYPE OF RELATIONS
  [Preview truncated; rows list targets Prof. A, Prof. B, Prof. C, and Prof. D with Avg. Correct, Std. Error, and Correct %, broken down by relation.] With…
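Section IV above proposes Shannon entropy to summarize the regularity of an email attribute. As an illustrative sketch only (plain Python, not the paper's code), entropy over an attribute's observed outcomes captures why a regular attribute is easier to guess than a frequently changing one:

```python
import math
from collections import Counter

def shannon_entropy(outcomes):
    """Shannon entropy H = -sum(p * log2 p), in bits, over the empirical
    distribution of an attribute's observed outcomes."""
    counts = Counter(outcomes)
    n = len(outcomes)
    # + 0.0 normalises the -0.0 produced by a single-outcome distribution.
    return -sum((c / n) * math.log2(c / n) for c in counts.values()) + 0.0

# A highly regular attribute (emails almost always go to one person) has
# low entropy; a frequently changing attribute has high entropy.
regular = ["alice"] * 8                       # one outcome
changing = ["alice", "bob", "carol", "dave"]  # uniform over 4 outcomes
print(shannon_entropy(regular))   # 0.0 bits: trivially guessable
print(shannon_entropy(changing))  # 2.0 bits: hard to guess
```

Under this view, a challenge question about a low-entropy attribute is easy even for a stranger, while a high-entropy attribute demands genuine knowledge of the target.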


Using Security Logs to Identify and Manage User Behaviour to Enhance Information Security.pdf

Using Security Logs to Identify and Manage User Behaviour to

Enhance Information Security

Rose Hunt and Stephen Hill

University of Wolverhampton, UK

r.hunt2@wlv.ac.uk

Stephen.Hill@wlv.ac.uk

Abstract: This paper describes a study which seeks to evaluate the relationship between user behaviour, including the use

of social technologies within the workplace, and the prevalence of malware infections routinely detected on devices. The

study’s initial focus is the extent to which security breaches are linked to the use by staff of social technologies, namely Social

Media, at work. It is a study which affirms previous research showing that Social Media use at work does present significant

security risks. It provides a possible basis for research into the management and change of user behaviour with reference to

security management, and would be of interest to Cyber and Information Security professionals and researchers in the field.

The context is a large university where network security is achieved through the separation into two separate domains of

the staff and student networks. The scope of this study focusses solely on staff behaviour, for reasons which include the very

high numbers of students, and the fact that the student population is much more short-term and transient and is therefore

not so appropriate for a longitudinal study. Daily automated logs were collected from a number of data sources including

anti-virus data from F-Secure security software and web activity data from Palo Alto firewall logs. These logs were examined

and a suitable data collection method was implemented which provided a successful combination of volume, manageability

and processing, and delivered satisfactory performance whilst retaining data accuracy. Once collected, processed and stored,

the user characteristic data derived from the logs was then analysed. Data mining and pattern recognition techniques were

used, with the Kohonen Self-Organising map used as a model for this analysis. Neural network data analysis tools within

Matlab were used to process the inputs, and data clustering became evident within the presented data. Findings showed

that social Media use increases users’ susceptibility to the introduction of malware infections. The most frequently

introduced malware types found in our study were trojans, but using Social Media also heightened the risk of introducing a

variety of other malware. Other information was gathered which provided insight into the behaviour of different user types,

grouped by age, and sex, and this will provide an underpinning to planned further research which seeks to find ways of

managing user behaviour in relation to security breaches.

Keywords: information security, cyber security, user behaviour, social media, malware, threat vectors 1. Introduction

This paper describes an initial study to look at ways of investigating the impact of user behaviours on an

organisation’s vulnerability to security breaches. Data was collected on current device infections detected by

the anti-virus and malware protection tool F-Secure during the period December 2013 to March 2014. This data

was correlated to user behaviours collected from the Palo Alto firewall for the same period, using neural network

analysis to identify trends and patterns of behaviour.
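The clustering step described above used a Kohonen Self-Organising Map in Matlab. As a rough sketch of the technique only, in Python/numpy rather than the authors' Matlab toolchain, and with hypothetical two-feature behaviour vectors, a minimal SOM maps each user's behaviour vector to its best-matching unit on a small grid, so similar profiles land in nearby cells:

```python
import numpy as np

def train_som(data, grid=(4, 4), epochs=200, seed=0):
    """Train a minimal Kohonen Self-Organising Map.

    data: (n_samples, n_features) array of user-behaviour vectors, e.g.
    normalised [social-media activity, infection count] per user
    (hypothetical features for illustration). Returns the
    (rows, cols, n_features) weight grid.
    """
    rng = np.random.default_rng(seed)
    rows, cols = grid
    weights = rng.random((rows, cols, data.shape[1]))
    # Grid coordinates used by the Gaussian neighbourhood function.
    coords = np.stack(np.meshgrid(np.arange(rows), np.arange(cols),
                                  indexing="ij"), axis=-1)
    for t in range(epochs):
        lr = 0.5 * (1 - t / epochs)                       # decaying learning rate
        sigma = max(rows, cols) / 2 * (1 - t / epochs) + 0.5  # shrinking radius
        for x in data[rng.permutation(len(data))]:
            # Best-matching unit: the cell whose weights are closest to x.
            d = np.linalg.norm(weights - x, axis=-1)
            bmu = np.unravel_index(np.argmin(d), d.shape)
            # Pull the BMU and its grid neighbours towards the sample.
            g = np.exp(-np.sum((coords - np.array(bmu)) ** 2, axis=-1)
                       / (2 * sigma ** 2))
            weights += lr * g[..., None] * (x - weights)
    return weights

def best_matching_unit(weights, x):
    d = np.linalg.norm(weights - x, axis=-1)
    return np.unravel_index(np.argmin(d), d.shape)

# Toy data: two clearly separated behaviour profiles (low vs high activity).
users = np.array([[0.1, 0.1], [0.15, 0.05], [0.9, 0.95], [0.95, 0.9]])
w = train_som(users)
low = {best_matching_unit(w, u) for u in users[:2]}
high = {best_matching_unit(w, u) for u in users[2:]}
print(low.isdisjoint(high))  # separated profiles should land in different cells
```

The clusters that "became evident" in the study correspond to such groups of map cells that attract similar user-behaviour vectors.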

The study sought to test two key hypotheses, firstly that online Social Media is widely used by employees at

their place of work, and secondly that online Social Media is a major infection vector which assists malware

distribution across networks and installation on vulnerable user devices. In order to test these hypotheses, the

study analysed data within the institution, to both ascertain whether the use of such online media contributes

to increased malware infections, and to understand the common indicators in user behaviour profiles which

identify high risk online Social Media activity.

The use of online Social Media allows scammers to exploit vulnerabilities. Online Social Media is used to deliver

malware into an organisation in several ways, with the highest risks associated with users clicking on links of

videos or photos, sometimes delivered via bogus private messages. The nature of the threat posed by Social

Media is complex, consisting as it does of technical attacks combined with sophisticated social engineering

techniques. (Trendmicro, 2013)

This study takes place within the context of a large, modern university, and examines the degree to which the

use of online Social Media by staff members compromises IT security. The use of such technologies by both staff

and students is widespread, and there are no plans to block this usage in any way, as many courses and staff

actively use Social Media in order to engage with students. Students respond positively to such use of Social Media (DeAndrea, David et al, 2012) and Social Media use is considered, within this context, to be a useful tool which can enhance students’ learning and engagement within the institution.

Rose Hunt and Stephen Hill

The use of Social Media for communication, sharing and engagement seems to have the potential to provide

benefits to staff, students and the organisation as a whole, improving communication and social interaction, and

adding flexibility to course delivery (Irwin, Ball et al, 2012). Social Media, defined by Trendmicro (2013) as ‘the

collective of online communications channels dedicated to community-based input, interaction, content-sharing

and collaboration’, encourages attributes such as participation, openness, conversation, community and

connectedness. (Mayfield, 2008). There are many different types of Social Media platforms, ranging from social

networking, blogging, publishing, video and livecasting, to gaming, virtual world and crowdsourcing applications.

Initial investigation shows that staff use of the internet and Social Media is different to student use in several

ways. Students tend to become victims of scammers or attacks by downloading media files, particularly music

and music videos, or through attacks delivered via chatrooms, whereas staff are less likely to download material,

but fall prey to malware delivered by browser exploits, email phishing attacks, and social engineering via Social

Media attack vectors. Staff access a separate intranet to students in the organisation, and may have

administration accounts for their computers which, if not carefully managed by the individual members of staff,

can allow malware to run on their devices, making their machines more vulnerable to exploits.

2. Initial research

The research methodology consisted of primary research which established user behaviour profiles based upon

device utilisation, category of web page visited and temporal and spatial elements which indicated online Social

Media access. Ethical issues were important when collecting data, as the collection of data relating to individual

Web behaviour on the internet contained confidential information. The research data was therefore

anonymised in order to protect the identity of the user and encrypted to protect confidentiality.
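The anonymisation step described above could be sketched as follows. This is an illustrative approach only — the paper does not describe its actual mechanism; the key name and pseudonym length here are assumptions. A keyed hash gives each user a stable pseudonym, so per-user records can still be correlated without storing identities.

```python
import hashlib
import hmac

# Hypothetical sketch: the secret key would be held separately from the
# research data set, so pseudonyms cannot be reversed by data holders.
SECRET_KEY = b"research-project-key"  # assumed name/value, for illustration

def anonymise(username: str) -> str:
    """Return a stable pseudonym for a username (HMAC-SHA256, truncated hex)."""
    return hmac.new(SECRET_KEY, username.encode("utf-8"), hashlib.sha256).hexdigest()[:16]

# The same user always maps to the same pseudonym, so behaviour and
# infection records can still be joined after anonymisation.
```

Because the mapping is deterministic under the key, later phases of the study can re-identify "the same user" across log sources without ever handling real account names.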

The realist evaluation model of Pawson and Tilley (1997) was used in order to establish causality. This relied upon the Context, Mechanism, and Outcome (CMO) model to provide an interpretative narrative of the problem and identify suitable areas for investigation.

The main sources of the quantitative data were Syslogs, network logs which were generated automatically every

24 hours, and FSecure anti-virus logs which were generated when an infection was detected on a work

environment device. Initial data collection indicated an increasing diversification of technology within the

network and extensive use of online Social Media. It also indicated an exponential growth in user demand for

Bring Your Own Device (BYOD) supported technology, and a corresponding need for a robust and readily

accessible Wi-Fi network. The scale of the network requirement, and therefore the need for investigation, can

be appreciated when considering that network use statistics revealed that 2013 saw a 20% increase in observed

devices on the corporate network with a total of 100,000 individual devices connecting to the corporate network

in a 12 month period. Investigation indicated that not only was there an increase in device diversity but also an

increasing diversity in Social Media, cloud storage and professional networking use by staff.

The collection of the quantitative data was achieved by utilizing a series of automated data collection processes

and tools and finally an online questionnaire to elicit user perceptions of online Social Media. FSecure anti-virus

software client data is collected continuously over a 24 hour period, which allows live data capture and provides

a high degree of data granularity based on user observed activity; this data allowed those individuals with a

higher than average incidence of malware infections to be identified. The behaviour of these identified

individuals from the research data set was examined further within a later phase of the study.
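The identification of above-average-infection individuals could be sketched as below. The paper does not give the FSecure log format, so a simple CSV layout (timestamp, device IP, anonymised user, infection type) is assumed for illustration; the sample rows are invented.

```python
import csv
from collections import Counter
from io import StringIO

# Hypothetical log layout: timestamp, device IP, anonymised user, infection type.
SAMPLE_LOG = """\
2014-01-11T09:14:00,10.0.0.5,u001,Trojan
2014-01-11T10:02:00,10.0.0.7,u002,Adware
2014-01-12T08:30:00,10.0.0.5,u001,Trojan
"""

def infections_per_user(log_text: str) -> Counter:
    """Count detected infections per (anonymised) user account."""
    counts = Counter()
    for ts, ip, user, infection in csv.reader(StringIO(log_text)):
        counts[user] += 1
    return counts
```

Accounts whose counts sit well above the mean of this distribution would then form the candidate set examined in the later phase.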

In order to establish user patterns of behaviour, the Palo Alto firewall log files were used to identify the Web

activity of the user. This is a different technique from the FSecure data samples in that Palo Alto categorises user

activity based on the type of application being used. (For example, Microsoft Outlook as email and Internet

Explorer as Web browsing.) These categories were then tracked to individual URL locations and placed into

observed behaviour categories: Facebook = Social Media, iPlayer = Streaming Video, etc. This data was collected

on a daily basis, scheduled to run automatically every evening, thereby enabling online tracking of individual

Web behaviours (Hughes, 2010) and thereby allowing infection sources to be discretely timed and the infection

vector allocated to a unique type of Web activity. This enabled us to build a pattern of user activity for analysis, and this provided the main quantitative primary data. Quantitative data was collected in three phases between 11/1/14 and 13/3/14.

3. Preliminary data analysis

Initial data collection tests showed that the quality of the quantitative data collected was sufficient to be able

to categorise user behaviour by relating activity to online Social Media, commercial web visits and internet portal

services. This analytical information allowed a taxonomy of user behaviours to be constructed, and the model

subsequently tested against test data.

A series of preliminary data analysis tests carried out in Phase 1 showed that the analytical pattern recognition,

consisting of running consecutive epochs through the neural network, could effectively and efficiently categorize

inputs into sets of linear separability within a short timeframe, which gave acceptable performance and

accuracy. The data patterns that emerged identified those users who actively and frequently used online Social

Media within the work environment. The quantitative user behaviour was compared to the malware infection

data which was collected on a daily basis and held in the FSecure database. This comparative analysis started to

indicate that patterns of behaviour were discernible in the data provided and that it was possible to attempt

pattern recognition with the vector of data inputs.

Application of triangulation within this research project was used in order to reduce data measurement error

and improve construct validity.

The research project used quantitative data collection from the Palo Alto firewall data logs to enable data

collection to be focused on those users more susceptible than average to incidents of malware infection. After

collecting several hundred detailed user behaviour datasets, it was possible to construct a set of normalized data

elements within a vector; these samples formed the data input to the neural network. This vector of data

elements was then presented to a neural network for pattern recognition training and unsupervised learning

(Theodoridis et al, 2009). Quantitative data collection reached a peak of around 350,000 data logs per day

during the course of the study. In order to manage and store data collected, a suitable database structure for

this type of data and the projected data volumes was created, using a SQL database. The data was collected

from all corporate Managed Devices. Managed Devices are defined as those devices (laptops, desktops) for

which IT Services assumes responsibility for deployment, management and network protection. The table holds

in excess of 500,650 individual records.
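The kind of SQL structure described could look like the sketch below. The actual schema is not given in the paper; the table and column names here are assumptions, and SQLite stands in for whichever SQL engine was used.

```python
import sqlite3

# Hypothetical schema for the combined log records (illustrative only).
conn = sqlite3.connect(":memory:")
conn.execute("""
    CREATE TABLE infection_log (
        id        INTEGER PRIMARY KEY,
        logged_at TEXT NOT NULL,     -- timestamp of the reported incident
        device_ip TEXT NOT NULL,     -- IP address of the managed device
        user_id   TEXT NOT NULL,     -- anonymised user account
        infection TEXT NOT NULL      -- e.g. 'Trojan', 'Adware', 'Worm'
    )
""")
conn.execute(
    "INSERT INTO infection_log (logged_at, device_ip, user_id, infection) VALUES (?, ?, ?, ?)",
    ("2014-01-11T09:14:00", "10.0.0.5", "u001", "Trojan"),
)
conn.commit()

# Per-user infection frequency then reduces to a simple GROUP BY.
rows = conn.execute(
    "SELECT user_id, COUNT(*) FROM infection_log GROUP BY user_id"
).fetchall()
```

Indexing `user_id` and `logged_at` would keep such queries fast at the half-million-record scale reported.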

The Palo Alto tool was configured to monitor and log user activity and provide behavioural information based

on Web activity and application type. A customized report was prepared to identify characteristics of user

behaviour which were linearly separate and these were scheduled to automatically run every evening. By

combining the FSecure data with the Palo Alto data it was possible to identify suitable data subjects and a

behavioural characteristics list. This master list was reduced to a smaller sample list of 25 elements. This smaller

list retained the distinct identifiers necessary for pattern recognition whilst allowing faster pattern recognition

without any perceivable loss in accuracy. The research team intend to apply data mining techniques to further

stages of the research, in order to improve manageability of the high levels of data.

4. Identification of high-risk users

Identification of users who displayed suitable attributes relied upon the selection of accounts belonging to staff

who appear more prone to repeated malware infections than the statistical average. Fig 1 below indicates that

an infection frequency rate above 22 incidents would indicate high infection rates based on malware infection

frequency.
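Given per-user infection counts, the 22-incident cut-off reported above reduces to a simple filter. A minimal sketch, with invented sample counts; the function name is an assumption:

```python
# Cut-off taken from the paper: more than 22 incidents indicates a high
# infection rate for an individual account.
HIGH_RISK_THRESHOLD = 22

def high_risk_users(infection_counts: dict) -> set:
    """Return accounts whose infection frequency exceeds the threshold."""
    return {user for user, n in infection_counts.items() if n > HIGH_RISK_THRESHOLD}

# Illustrative counts: u001 and u003 exceed the threshold, u002 does not.
counts = {"u001": 30, "u002": 5, "u003": 23}
risky = high_risk_users(counts)
```

The resulting set would seed the behavioural follow-up (survey and firewall-log profiling) described later in the paper.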

FSecure network logs provided the following data: timestamp (date/time) of the reported incident, IP address

of the device, infection definition and where on the device the infection was located, and an SQL database

provided the tools and techniques required to quickly identify high infection rates against individual user

accounts – particularly Trojan infections, as these rely upon user interaction for download and installation, and

therefore allow a closer study of user behaviour. From the data extracted from the FSecure logs nine general

categories of virus/malware infections were identified: Trojan, Java exploit, General infection (generic threat),

Adware, Exploit of HTML vulnerabilities, Worm infections, Backdoor exploits, Dialler malware and W32 general

exploits.

Figure 1: Malware infection frequency

Figure 2 details the infection rates of staff devices within the network for the period December 2012 – February

2013. The exclusion of ‘General’ infections (viruses identified but not categorised; total 15,349) and Trojan infections (total 4,227) allowed a clearer picture to emerge of infection types and frequency. Trojans were excluded because they are frequently the attack vector for delivery of different types of malware.

Figure 2: Infection rates of staff devices

The main objective was to identify patterns of behaviour which placed the user at greater risk of infections and

to be able to describe these behaviours in order to develop user profiles for incident response when an infection

outbreak occurs.

5. Data processing for the neural network

The user characteristic data was pre-processed based on the Khan Model (Khan, 2008) of data pre-processing

for neural networks. This produced a matrix of activity values which were normalised between the values of 0-1. Normalisation was then used as a tool in order to prevent characteristics which possess a larger range from overwhelming the patterns present. Several techniques are available for normalisation; min-max, z-score and decimal scaling are all suitable. The min-max method (Khan, 2008) was used in this instance as it provided a simple and efficiently applied formula, suitable for large data sets.
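The min-max rescaling described above can be sketched in a few lines. This is a generic illustration of the technique (each column rescaled to [0, 1]), not the study's actual preprocessing code; column values are assumed numeric.

```python
def min_max_normalise(matrix):
    """Rescale each column of a list-of-rows matrix to the range [0, 1]."""
    cols = list(zip(*matrix))
    scaled_cols = []
    for col in cols:
        lo, hi = min(col), max(col)
        span = hi - lo or 1.0  # avoid division by zero for constant columns
        scaled_cols.append([(v - lo) / span for v in col])
    # Transpose back to row-major order.
    return [list(row) for row in zip(*scaled_cols)]

# Two characteristics with very different ranges end up on the same 0-1 scale.
data = [[10, 200], [20, 400], [30, 300]]
normalised = min_max_normalise(data)
```

After this step, a characteristic spanning hundreds of units carries no more weight in the distance calculations than one spanning tens, which is exactly the "overwhelming" effect the normalisation is meant to prevent.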

The resulting vectors were then presented to a Matlab Neural Network package in order to output a Kohonen

SOM model (Kohonen, 1988) for pattern recognition. Pattern recognition using a neural network was selected

as a suitable technique, as evidence suggests that the technique is well suited to pattern recognition in large

data sets (Kohonen, 1988). The model used for this analysis was based upon the Kohonen Self-organising map,

using machine learning to train the software to recognise groups of similar input vectors in such a way that

neurons physically near each other respond to similar inputs.

This technique creates a Self-organising Map (SOM) (Kohonen, 1988) which classifies inputs into patterns. Figure 3 shows the resulting map obtained from the Phase 1 data sets. The map clearly shows clustering of behaviours with clear separation between patterns, indicated by the darker colours between clusters.

Figure 3: SOM Phase 1 analysis

The neural network model consisted of an input of 53 samples with 25 elements; presented to 10 hidden layers

within the neural network and producing 100 output elements. Within the basic neural network model used for

pattern recognition the mathematical formula for clustering the input vectors is:

i ∈ N_{i*}(d)

This formula allows the neurons in the SOM to be adjusted as follows:

w_i(q) = (1 − α) w_i(q − 1) + α p(q)

Here the neighbourhood N_{i*}(d) contains the indices for all the neurons that lie within a radius of d of the winning neuron i*.
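A minimal sketch of this update rule follows: find the winning neuron, then move every neuron in its neighbourhood N_{i*}(d) towards the input p with learning rate α. The grid layout, α, and d are illustrative values, not the study's Matlab settings.

```python
import math

def train_step(weights, positions, p, alpha=0.5, d=1.0):
    """One SOM step: find the winner, update all neurons within radius d of it."""
    # Winning neuron i*: the neuron whose weight vector is closest to input p.
    winner = min(range(len(weights)),
                 key=lambda i: sum((w - x) ** 2 for w, x in zip(weights[i], p)))
    for i, pos in enumerate(positions):
        # Neighbourhood N_{i*}(d): neurons within grid distance d of the winner.
        if math.dist(pos, positions[winner]) <= d:
            # w_i(q) = (1 - alpha) * w_i(q - 1) + alpha * p(q)
            weights[i] = [(1 - alpha) * w + alpha * x
                          for w, x in zip(weights[i], p)]
    return winner

# A 1-D map of three neurons, each with a 2-element weight vector.
positions = [(0.0,), (1.0,), (2.0,)]
weights = [[0.0, 0.0], [0.5, 0.5], [1.0, 1.0]]
winner = train_step(weights, positions, [1.0, 1.0], alpha=0.5, d=1.0)
```

Repeating this step over many inputs (with α and d shrinking over time) is what produces the clustered map in Figure 3, where physically adjacent neurons come to respond to similar input vectors.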

A SOM Pattern Recognition Layers model was developed, giving a framework, the Kohonen Map, which is recognised as a suitable method for rapidly identifying patterns, even when the input data has omissions or is corrupted during collection or analysis. This makes the Kohonen Map an extremely robust pattern recognition technique (Kohonen, 1990).

Figure 4: Kohonen map – model structure

The Kohonen Self Organising Map indicated that the outputs from the neural network had recognised a number

of patterns in user behaviour, and these now needed to be applied to the data in order to identify which behaviours the patterns represented. The neural network output represented a number of

distinct patterns which were coded from A to AN, and then matched to the observed user behaviours. The output

data patterns were then fed into Gephi Graphic Visualisation and Modification software. This process allowed

the patterns to be mapped as relationships between nodes. The nodes represented individual users, infections and pattern codes.

Figure 5: Pattern analysis graphic – phase 1

Pattern recognition was compared between Phase 2 and Phase 3, and consistency was shown between the

different data from the two phases, showing that pattern groupings can be applied to new data and will still successfully categorise behaviours into recognisable patterns of characteristics.

6. General findings

Initial findings showed the infection pattern on the high risk group staff machines (see Figure 6 below), which

indicates that Social Media applications are implicated in 23.68% of the reported infections. Of this 23.68%

Adware was detected 55% of the time. These findings also showed that certain user behaviours are more closely

associated with Adware (Trojan) infections, commonly Shopping, Social Media and Web Enabled Mail Web

traffic. Repeated online Social Media use is associated with 24% (rounded up) of computer infections, which relates to a 55% rate of adware infection, but a 34% chance of downloading a Trojan infection.

Figure 6: Computer infections by incident and volume

In order to broaden the research model an online Social Media user survey was undertaken with the high risk

users with a history of above average malware/virus incidents reported on their devices. Respondents to the

survey showed a distinct variation in age, gender and work experience (see Figure 7 below).

Figure 7: User profiles, social media survey

The data was collected from those members of staff who displayed a higher than normal infection rate on staff

devices. Therefore all members of the observation group had already demonstrated a high level of malware

infections, and were a high risk group. Survey questions were largely based on the research of DiMicco et al

(2008) on user motivation.

6.1 Survey findings

Findings within the high risk group were as follows:

▪ Males under 31 tended to have a higher level of Social Media use whilst at work. In addition, the data shows that males are almost exclusively the main users of streaming media whilst at work.

▪ Female online Social Media use at work showed a marked increase in volume between the ages of 31-45 yrs. The data also indicated that this group were more disposed to online shopping at work than male colleagues.

▪ High risk female respondents were generally younger in age and had less work experience than their male counterparts. Age distribution did not replicate the findings of Leftheriotis (2013), and more research needs to be done to validate the initial findings.

7. Discussion and conclusions

In our study, we found that Internet and online Social Media behaviours vary between different age groups and

genders, and that different groups are likely to be the victims of a different pattern of infection vectors.

Although Web Based Email is the greatest infection vector by incident, Social Media use attracts a far higher

incidence of Trojan infections. This corresponds to the perception that Social Media poses threats through social

engineering and opening up other attack vectors through user behaviour. Several risks associated with online

Social Media use appear to stem from Web browser vulnerabilities; this assertion is validated by the findings of

Abrams et al (2014) in their recent vulnerability assessment.

The use of Social Media is a sophisticated attack vector which is intended to lure the user into running Trojan

malware, thereby providing a persistent threat which can be loaded with different malware infections when

required. Several research projects such as Thomas and Nicol’s work on the Koobface botnet and the rise of

social malware (Thomas and Nicol, 2010), have identified a lower level of threat perception by users when using

Social Media, and the higher level of Trojan infections our study detected from the Social Media group probably

supports this. They also found that Social Media security measures only identify 27% of threats and take at least

4 days to address the security issue, which implies that large numbers of social network users are vulnerable.

The results we obtained were not unexpected, but the research demonstrates how, by using a readily available

data source, organisations can obtain information which can support security measures within an organisation.

Research done by Bohme and Grossklags (2011) shows that users’ attention is a finite resource, so security

measures need to be designed accordingly. It is important that security initiatives are targeted and effective,

and identifying user characteristics and behaviours could help security managers to create robust security initiatives, training and policy. Understanding the user is the first step towards developing effective security

initiatives. If we can identify the most at risk users, and target training towards those users, we may be able to

make training and education, traditionally so difficult within a security context, more effective. Finding ways to

apply the rich data held within security logs could provide the information necessary to do this. In addition, the research raises many other questions. The research seems to show clearly that behaviour differs between genders, for example, but further work needs to be done to check the accuracy of the results. There is a need to

explore why different user behaviours are apparent between genders, but also to look at the study in terms of

age, type of job done (do staff in the Computing department behave differently to staff in the Languages

department, for example), and length of service. Other factors could be investigated as well. Some examples of

questions we might ask include whether user behaviour differs if the user works alone, whether there are

particular times of the day when users are most likely to access more risky websites, whether user behaviour

changes according to the time of year, or whether they are going on leave in the near future, and whether

different levels of loyalty to their employer or attitudes affect their behaviour.

The next phase of this study will use data mining to manage log data and analysis so that high risk users can be

identified more easily. The study will also identify different training and ed…


WHAT INFLUENCES INFORMATION SECURITY BEHAVIOR A STUDY WITH BRAZILIAN USERS.pdf

JISTEM – Journal of Information Systems and Technology Management

Revista de Gestão da Tecnologia e Sistemas de Informação

Vol. 13, No. 3, Set/Dez., 2016 pp. 479-496

ISSN online: 1807-1775

DOI: 10.4301/S1807-17752016000300007 WHAT INFLUENCES INFORMATION SECURITY BEHAVIOR? A

STUDY WITH BRAZILIAN USERS

Rodrigo Hickmann Klein

Pontifícia Universidade Católica do Rio Grande do Sul, Rio Grande do Sul, Brasil

Edimara Mezzomo Luciano

Programa de Pós-Graduação em Administração Pontifícia Universidade Católica do Rio

Grande do Sul, Rio Grande do Sul, Brasil

______________________________________________________________________

ABSTRACT

The popularization of software to mitigate Information Security threats can

produce an exaggerated notion about its full effectiveness in the elimination of

any threat. This situation can result in reckless user behavior, increasing

vulnerability. Based on behavioral theories, a theoretical model and hypotheses

were developed to understand the extent to which human perception of threat,

control and disgruntlement can induce responsible behavior. A self-administered

questionnaire was created and validated. The data were collected in Brazil, and

complementary results regarding similar studies conducted in the USA were found.

The results show that there is an influence of information security orientations provided by organizations on the perception of the severity of the threat. The

relationship between threat, effort, control and disgruntlement, and the

responsible behavior towards information security was verified through linear

regression. The results also point out the significant influence of the analyzed

construct on Safe Behavior. The contributions involve relatively new concepts

in the field and a new research instrument as well. For the practitioners, this

study highlights the importance of Perceived Severity and Perceived

Susceptibility in the formulation of the content of Information Security

awareness guidelines within organizations. Moreover, users’ disgruntlement

with the organization, colleagues or superiors is a factor to be considered in the

awareness programs.

Keywords: Information Security; Safe Behavior; Users’ behavior; Brazilian

users; threats

____________________________________________________________________________________

Manuscript first received/Recebido em: 26/07/2015 Manuscript accepted/Aprovado em: 07/12/2016

Address for correspondence / Endereço para correspondência

Rodrigo Hickmann Klein, Pontifícia Universidade Católica do Rio Grande do Sul, Rio Grande do Sul,

Brasil Mestre e Doutorando em Administração, Programa de Pós-Graduação em Administração

Pontifícia Universidade Católica do Rio Grande do Sul E-mail: rodrigo.hickmann@acad.pucrs.br

Edimara Mezzomo Luciano, Programa de Pós-Graduação em Administração Pontifícia Universidade

Católica do Rio Grande do Sul, Rio Grande do Sul, Brasil , Professora Titular da Faculdade de

Administração, Contabilidade e Economia, Membro Permanente do Programa de Pós-Graduação em

Administração E-mail: eluciano@pucrs.br

Published by/ Publicado por: TECSI FEA USP – 2016 All rights reserved

1. INTRODUCTION

The popularization of software intended to mitigate threats to Information

Security has given users a sensation that software and hardware are enough to reduce

Information Security breaches and suppress threats. This mistaken sensation may have

originated from obtaining partial information on the subject or from the lack of

adequate awareness (Liang and Xue, 2009), and also from negligence, apathy, mischief, and resistance (Safa, Von Solms and Furnell, 2015). This is a human factor that might increase vulnerabilities, since it could influence Information Systems (IS) users to behave recklessly (Liginlal, Sim and Khansa, 2009). Human aspects of information security remain a critical and challenging component of a safe and secure information environment.

However, this misconception alone does not explain breaches in Information

Security caused by human factors. Another important insight is the efforts perceived as

necessary to achieve responsible behavior, which, added to aspects such as indifference to the guidelines and human error, may also induce vulnerability and

breaches. Information Security refers to the protection of organizational assets from

loss, undue exposure and damage (Dazazi et al., 2009). This concern has been gaining

ground and popularity in recent decades due to IT artifacts that have gradually enabled

the generation, processing and ubiquity of unprecedented information and have also

fostered the possibility of threats (King and Raja, 2012). This article investigates the

impact of user behavior on Information Security vulnerabilities. The study is grounded

in user perceptions related to threats, control, and the effort to behave responsibly.

Vance, Siponen and Pahnila (2012) conceptualize the vulnerability as the probability of

an unwanted incident occurring if no measures are taken to prevent it. Roratto and Dias

(2014) define vulnerability as a weakness in the computer system or its surroundings,

which can become a security risk.

According to Kjell (2015), organizations choose an optimal defense, which is costly and consists of investing in Information Technology Security to protect their

assets. Simultaneously, hackers collect information in various manners, and attempt to

gain access to organizations’ information security breaches, collected by the

organizations themselves.

Albrechtsen and Hovden (2009) consider the users to be a liability when they

do not possess the necessary skills and knowledge, thereby causing the reckless use of

network connections and information or practicing unsafe acts within the organization.

User perceptions may be enhanced through Security Education, Training, and

Awareness (SETA) programs, which explain potential threats facing the organization

and provide methods for users to improve information security practices (D’Arcy et al.

2009).

However, the perception of threat is not the only thing that encourages responsible behavior, since the perception of threat imminence varies from individual to individual. The effort required for responsible behavior and the relative perception of control, in addition to the mitigating factors of responsible behavior that result from the context experienced by the individual, are also based on individual perception. When threats are not perceived as imminent, the efforts to follow rules and best practices in Information Security may be considered unnecessary, unproductive and merely a regulatory formality (Herath and Rao, 2009a). In this circumstance the procedures that provide Information Security may be unsuccessful or circumvented depending on the

individual perception about the balance of control/punishment and benefits.

Furthermore, the disgruntlement of a user with organizations or people who set

standards may produce actions that circumvent security either as a means of

demonstrating their discontent (Willison and Warkentin, 2013) or simply through low

motivation to comply with them (Kelloway et al. 2010).

Da Veiga and Eloff (2010) argue that the Information Security approach in an

organization should be focused on employee behavior, given that success or failure in protecting information depends on what employees do or don’t do. So the way users

behave may stem from perceptions about perceived threats, controls and punishments

and about perceived effort as well as environmental factors such as work overload,

fatigue (Kraemer and Carayon, 2007) and disgruntlement (Willison and Warkentin,

2013; Kelloway et al., 2010). These factors may contribute to behaviors that generate

vulnerability and breaches, compromising all the Information Security principles and

turning information into useless pieces of data due to their loss of reliability.

Based on the concepts addressed, this article aims to identify the influence of the

user’s perception of the threat, control, effort and disgruntlement in safe behavior

regarding Information Security.

This introduction shows the subject, research problem and goal. The theoretical

basis is presented in Section 2, followed by the research model and hypotheses (Section

3). The methodological aspects are presented in Section 4, followed by the results

(Section 5) and the discussion of the findings (Section 6).

2. THEORETICAL BACKGROUND

According to Liang and Xue (2010) the perceived threat is defined as the degree to which an individual perceives a malicious IT attack as dangerous or harmful. IT

users develop threat perception, monitoring their computing environment and detecting

potential dangers. Based on health psychology and risk analysis, the authors suggest

that the perception of threat is formed by perceived susceptibility and perceived

severity.

Perceived susceptibility is defined by Liang and Xue (2010) as an individual’s subjective probability that a malicious IT attack (malware) will adversely affect them.

On the other hand, the perceived severity is defined as the degree to which an individual

perceives that adverse effects caused by malware will be severe. According to the

authors, previous studies on health protection behavior have provided a theoretical and

empirical foundation on careful behavior among patients, influenced by perceptions

related to the threat, which can be adapted to Information Security. The authors argue

that the perceived likelihood and the negative consequences of the severity of a disease

may result in the perception of a health threat, which motivates people to take measures

to protect their health.

The threat assessment may cover the perceived severity of a violation (Herath and Rao, 2009a) or the perceived likelihood of a security breach (Herath and Rao, 2009b). The severity

is the level of potential impact and damage the threat may cause, i.e., the severity of a

security breach and the possibility of an adverse event caused by such (Vance, Siponen

and Pahnila, 2012). Herath and Rao (2009b) found that the perception of the severity of

the breach does not impact on the compliance of regulations or security policies. In

contrast, Workman, Bommer and Straub (2008) found that the perceived severity was

significant for compliance, as well as the likelihood of a security breach. Johnston and

Warkentin (2010) found indications that perceptions regarding the severity of a threat

negatively influence the perceptions regarding the response effectiveness and also

regarding the perceptions of the self-efficacy related to the threat.

JISTEM, Brazil Vol. 13, No. 3, Set/Dez., 2016 pp. 479-496 www.jistem.fea.usp.br Klein, R. H. & Luciano, E. M.

Several authors have studied the perception of susceptibility. Ng, Kankanhalli

and Xu (2009) have demonstrated that perceived susceptibility affects users’ behavior

regarding emails. According to the authors, when users are aware of the likelihood of

threats (perceived susceptibility) and of the effectiveness of security controls (perceived

benefits), they may make a conscious decision to behave appropriately. However,

perceived severity was not decisive in influencing the users’ safe behavior. The

research of Johnston and Warkentin (2010) was not able to demonstrate that perceived

susceptibility of threats negatively influences the perceived efficacy of response, or that

the perceived susceptibility of threats negatively influences the perception of self-efficacy. However, they demonstrated that perceived severity of the threat negatively

influences perceived efficacy of response and the perceptions of self-efficacy.

According to Herath and Rao (2009a), gaps are security breaches. Moreover,

employee negligence and non-compliance with the rules often cause damage to

organizations. However, users’ behavior can help to reduce these gaps by

following better practices, such as protecting data with suitable passwords or logging

off when leaving the computer that is being used. Workman, Bommer and Straub

(2008) show that perceived vulnerability and severity have an effect on users’

Information Security behavior.

Herath and Rao (2009b) suggest that perceptions regarding the severity of the

breaches, the effectiveness of the response and self-efficacy are likely to have a positive

effect on attitudes towards security policies, whilst the cost of response negatively

influences favorable attitudes. They also suggest that social influence has a significant

impact on intentions to comply with Information Security policies. The availability of

resources is a significant factor in the increase of self-efficacy, which in turn is

important to predict the intention to comply with Information Security policies.

Moreover, organizational commitment plays a dual role, having a direct impact on

intentions, as well as on promoting the belief that the actions of employees have a

global effect on the Information Security of an organization.

Despite the differences among these results, the consensus among researchers is

that users assess the susceptibility and the severity of negative consequences in order to

determine the threat they are facing.

Although technologies are deployed with the aim of guaranteeing organizational

Information Security, these technologies are not enough to avoid gaps, because

Information Security cannot be defined or understood as a purely technical problem

(Kearney and Kruger, 2016). For this reason, studies of users’ Information

Security behavior are receiving growing attention (Herath and Rao, 2009b).

3. MODEL AND HYPOTHESES

The model (Figure 1) was developed based on the theoretical background

exposed previously.

According to Ng, Kankanhalli and Xu (2009), the perception of Information

Security risks and damages, and of their likelihood of occurrence, depends on

individuals’ capacity to assess them.

What influences information security behavior? A study with Brazilian users

Figure 1 – Theoretical model and hypotheses

It covers the perception of susceptibility to the threat and the severity of the

threat, because when individuals perceive a greater susceptibility to security incidents,

they are likely to exhibit a higher level of safe behavior. Based on these concepts, the

following hypothesis was formulated:

H1: The perceived susceptibility of the threat to Information Security positively

influences safe behavior regarding Information Security.

Workman, Bommer and Straub (2008) found that the perceived severity was

significant for compliance with Information Security Policy guidelines and for the

likelihood of a security breach. For Liang and Xue (2009), the perceived severity is

defined as the degree to which an individual perceives that negative consequences

caused by malware will be severe. According to Ng, Kankanhalli and Xu (2009), when

users are aware of the susceptibility and severity of the threats, they can make informed

decisions to exercise adequate preventive behavior. Bearing these concepts in mind, the

following hypothesis was formulated:

H2: The perceived severity of the threat to Information Security positively

influences safe behavior regarding Information Security.

Herath and Rao (2009b), in their research on the effects of deterrence, found that

the certainty of detection has a positive impact on the intentions to comply with the

Security Policy guidelines. When employees perceive a high probability of being

discovered violating the guidelines, they will be more likely to follow them. This

concept produced the following hypothesis:

H3: The perception of the certainty of detection of not following the guidelines

on Information Security positively influences safe behavior regarding Information

Security.

Sanctions are defined as punishments, material or otherwise, incurred by an

employee for failure to comply with information security policies (Bulgurcu, Cavusoglu

and Benbasat, 2010). Examples of sanctions include demotions, loss of reputation,

reprimands, financial or non-financial penalties, and unfavorable evaluations. The

perception of these sanctions regarding non-compliance with the rules influences the

user to behave responsibly, in accordance with the certainty of detection of non-compliance with the security standards, and the severity and swiftness of punishment

(Herath and Rao, 2009a and 2009b). From the combination of these concepts the

following hypothesis was formulated:

H4: The perception of the Punishment Severity for not following the guidelines

regarding Information Security positively influences safe behavior in terms of

Information Security.

According to Liang and Xue (2009), the safeguard effort refers to physical and

cognitive efforts – such as time, money, inconvenience and understanding – necessary

for the safeguarding action. These efforts tend to create behavioral barriers and reduce

the motivation for Safe Behavior regarding Information Security, due to the cost-benefit

analysis. The authors cite the example of people’s behavior regarding their health, when

comparing the costs and benefits of a particular healthy behavior before deciding to

practice it. If the costs are considered high when compared to the benefits, people are

not likely to adopt the behavior recommended by health professionals. Thus, the user’s

motivation to avoid any Information Security threat may be mitigated by the potential

cost to safeguard (Liang and Xue, 2010). According to these concepts the following

hypothesis was developed:

H5: The perception of effort to safeguard when following the Information

Security guidelines negatively influences safe behavior related to Information Security.

There is the possibility of breaches occurring due to lack of motivation to follow

the safety guidelines (Kelloway et al., 2010), disgruntlement with the organization or

colleagues (Willison and Warkentin, 2013; Spector et al., 2006), or as a form of protest

resulting from an unsatisfactory situation (Spector et al., 2006). According to this

possibility the following hypothesis was formulated:

H6: Satisfaction with colleagues, superiors or organization positively influences

Safe Behavior regarding Information Security.

The control variable presented in the theoretical model indicates that the data

analysis will be performed on the sample of respondents who received verbal or written

guidance on Information Security from the organization for which they worked at the

time of data collection. This selection allows us to obtain the perceptions of respondents

who already have some insight into the threats, the level of control and

monitoring, and the punishment for not following the guidelines. It also allows

the comparison of the results with the group of respondents who did not receive the

same kind of guidance.

4. METHODOLOGY

The research used an exploratory approach by conducting a survey through a

self-administered questionnaire for quantitative cross-sectional data collection.

The population of this survey was composed of Information Systems users in an

organizational environment from organizations of any size, industry or field of activity.

However, the respondents had to have received written or oral Information Security

guidance from the organization they worked for by the time they completed the

questionnaire. The sampling process was non-probabilistic, by convenience (Hair et al.,

2005).

The survey instrument was developed from consolidated instruments on the

subject, as shown in the Appendix. The instrument used a Likert type scale ranging

across five categories, from 1 (strongly disagree) to 5 (strongly agree), based on the

instruments used in the three original surveys used as a reference for the theoretical

model.

During pre-tests, a set of preliminary validations was conducted in order to obtain a

suitable measuring instrument, as Malhotra (2009) recommends when an instrument

is assembled from instruments previously used in other research.

The first part of the instrument validation was carried out by face and content

validation and involved a group of professors in MIS. As a result, some questions were

amended in terms of their content and order. The most significant change was the

alteration of the Disgruntlement construct, originated in Willison and Warkentin

(2013), whose items were reversed: after this validation step, disgruntlement came to

be measured from the perspective of lack of contentment.

The validation of the instrument was performed by applying the instrument to a

sample of 229 Brazilian IT users (non-probability sample for convenience). After the

exclusion of incomplete questionnaires, 216 valid respondents remained. However,

after applying the filter that kept only those respondents who had received some guidance on

Information Security, 135 respondents remained valid in the pre-test sample.

After the adaptation of the questionnaire, data collection was conducted through

a printed form and simultaneously through an electronic survey in order to increase the

number of respondents. A total of 171 valid questionnaires was obtained

(completely filled in and without errors). Among them, 112 had received some Information

Security guidance and had work experience ranging from 1 to 30 years. This sample was used

in the final data analysis. With 112 respondents and 15 questions in the final data

analysis, the ratio of respondents per question was 7.47, higher than the ratio of five

recommended by Malhotra (2009). As indicated by Hair et al. (2011), a

t-test was conducted to ascertain whether the responses of the sample

collected on paper differed from the responses obtained through

the online survey; no difference was found.
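An equivalent independent-samples check can be sketched in Python with `scipy.stats.ttest_ind`; the response arrays below are illustrative values, not the study’s data, and Welch’s variant is assumed as a conservative choice since the paper does not state which variant was applied.

```python
import numpy as np
from scipy import stats

# Illustrative 5-point Likert construct scores (not the study's data),
# split by collection mode.
paper = np.array([3.8, 4.0, 3.5, 4.2, 3.9, 3.6, 4.1, 3.7])
online = np.array([3.9, 3.6, 4.0, 3.8, 4.1, 3.5, 3.7, 4.2])

# Independent-samples t-test; equal_var=False (Welch) does not
# assume equal variances across the two groups.
t_stat, p_value = stats.ttest_ind(paper, online, equal_var=False)

# A p-value above the usual 0.05 threshold would indicate no
# detectable difference between collection modes.
print(f"t = {t_stat:.3f}, p = {p_value:.3f}")
```

With identical group means, as in this toy data, the statistic is essentially zero and the p-value is near one.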

All the analyses were performed using the SPSS Statistics version 20 software.

5. RESULTS

In order to validate the reliability of the survey instrument in the pre-test phase,

Cronbach’s alpha was used. At this stage of validation, a multivariate analysis was also

performed to verify the structure of the factors that make up the scales. To do

this, a principal component analysis with varimax rotation was used, following the

recommendations of Hair et al. (2009).
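The authors performed this step in SPSS; purely as an illustration of the technique, the sketch below computes principal-component loadings from a correlation matrix and applies a textbook varimax rotation to simulated data. All data and the two-factor structure are assumptions for the example, not the study’s results.

```python
import numpy as np

def varimax(loadings, max_iter=100, tol=1e-6):
    """Kaiser varimax rotation of a (p items x k factors) loading matrix."""
    p, k = loadings.shape
    rotation = np.eye(k)
    total = 0.0
    for _ in range(max_iter):
        rotated = loadings @ rotation
        # Gradient of the varimax criterion (gamma = 1).
        b = loadings.T @ (rotated ** 3
                          - rotated @ np.diag((rotated ** 2).sum(axis=0)) / p)
        u, s, vt = np.linalg.svd(b)
        rotation = u @ vt  # orthogonal update
        if s.sum() < total * (1 + tol):
            break
        total = s.sum()
    return loadings @ rotation

rng = np.random.default_rng(42)
# Simulated standardized responses: 200 cases, 6 items, 2 latent factors.
f = rng.normal(size=(200, 2))
x = np.hstack([f[:, [0]] * 0.8, f[:, [0]] * 0.7, f[:, [0]] * 0.9,
               f[:, [1]] * 0.8, f[:, [1]] * 0.7, f[:, [1]] * 0.9])
x += rng.normal(scale=0.4, size=x.shape)

# PCA on the correlation matrix; keep the two largest components.
corr = np.corrcoef(x, rowvar=False)
eigvals, eigvecs = np.linalg.eigh(corr)
order = np.argsort(eigvals)[::-1][:2]
loadings = eigvecs[:, order] * np.sqrt(eigvals[order])

rotated = varimax(loadings)
print(np.round(rotated, 3))
```

Because varimax is an orthogonal rotation, each item’s communality (row sum of squared loadings) is unchanged; only the loadings are redistributed toward a simpler structure.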

The research sample (final collection) consisted of 112 respondents. Figure 2

shows the respondents’ gender and education profiles.

Figure 2 – Respondents’ Gender and Education Profiles

Figure 3 shows gender versus years of work experience.

Figure 3 – Respondents’ Gender and Work Experience

The normality of the collected data was verified through descriptive univariate

analysis, by examining skewness (symmetry) and kurtosis.
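This screening step can be illustrated as follows; the simulated Likert item and the ±1 rule-of-thumb cutoffs mentioned in the comment are assumptions for the example, not the authors’ stated criteria.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
# Simulated 5-point Likert item: 112 responses clustered around 4.
item = np.clip(np.round(rng.normal(loc=4.0, scale=0.8, size=112)), 1, 5)

skewness = stats.skew(item)
excess_kurtosis = stats.kurtosis(item)  # Fisher definition: normal -> 0

# Common rule of thumb: |skewness| and |excess kurtosis| below roughly 1
# suggest the departure from normality is mild enough to proceed.
print(f"skewness = {skewness:.3f}, excess kurtosis = {excess_kurtosis:.3f}")
```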

The reliability of the scales was assessed by Cronbach’s Alpha coefficient. A

Cronbach’s Alpha of 0.767 was obtained for the set of all 15 mandatory variables,

which measured the constructs of the model. Cronbach’s alphas for each construct are

shown in Table 1.

Construct                 Variables               Cronbach’s Alpha
Threat Susceptibility     SUS1, SUS2, SUS3        0.857
Disgruntlement            DESC1, DESC2, DESC3     0.819
Punishment Severity       PUNSEV2, PUNSEV3        0.852
Safe Behavior             BEH1, BEH2, BEH3        0.725
Certainty of Detection    DETCERT1, DETCERT2      0.684
Threat Severity           SEV1, SEV3              0.615
Effort in Safeguarding    PSC2, PSC3 and PSC4*    0.591
* Optional questions

Table 1 – Cronbach’s Alpha for each construct
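The alphas in Table 1 were produced with SPSS, but the coefficient follows directly from the item variances and the variance of the total score. A minimal sketch with invented data (the SUS1–SUS3 values below are illustrative, not the study’s responses):

```python
import numpy as np

def cronbach_alpha(items):
    """Cronbach's alpha for an (n respondents x k items) matrix."""
    items = np.asarray(items, dtype=float)
    k = items.shape[1]
    item_variances = items.var(axis=0, ddof=1)
    total_variance = items.sum(axis=1).var(ddof=1)
    return (k / (k - 1)) * (1 - item_variances.sum() / total_variance)

# Illustrative 5-point Likert answers for a 3-item construct
# (e.g. SUS1-SUS3); rows are respondents.
sus = np.array([[4, 4, 5],
                [3, 3, 3],
                [5, 4, 5],
                [2, 3, 2],
                [4, 5, 4],
                [3, 2, 3]])
print(f"alpha = {cronbach_alpha(sus):.3f}")
```

As a sanity check, three perfectly identical items yield an alpha of exactly 1, while weakly correlated items pull the coefficient down.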

The optional questions obtained a low Cronbach’s Alpha coefficient due to the

low number of respondents who answered them (N=35). Thus, the construct Effort in

Safeguarding and its variables (based on Liang and Xue, 2010) were not used in the

analysis, leaving hypothesis H5 without support. Other statistical indicators

were also taken into account in that decision,

such as the difference in the T-Test and the lack of convergence for the respective

factor in the Convergent Factor Analysis, shown in Table 2.

Variables    Factors*
             1 (SUS)   2 (DESC)   3 (PUNSEV)   4 (DECERT)   5 (SEV)
SUS1          0.894    -0.031     -0.148        0.080        0.146
SUS2          0.868    -0.085     -0.092       -0.003        0.060
SUS3          0.854     0.056     -0.003       -0.081        0.046
DESC1         0.015     0.875      0.023        0.054       -0.106
DES…
