Wednesday, July 31, 2019

Adventures of Huck Finn

The Adventures of Huckleberry Finn portrays American society during the time when the book was written. The protagonist, Huck, goes through significant development and comes to view life differently from what society has taught him. Throughout the story, three things stand out: the characterization of the society and how it works, the progress of Huck's relationship with Jim, and the explanation of why Huck respects certain individuals and why he is critical of others.

The picture of Southern society that can be derived from the book is of a society that lacks an effective government and is full of violence. In chapter five, a judge issues an order giving custody of Huck to his father despite the father's history of neglect and abuse, which makes the wisdom and morality of public officials questionable. In chapter eighteen, it is revealed that a feud between the Grangerfords and the Shepherdsons has been going on for years and that multiple lives have been lost. In chapter twenty-two, a mob charges to Sherburn's house to lynch him for shooting a drunken man. All of this shows widespread violence and a lack of rightful laws; people could execute someone accused of a crime without legal due process. This is the kind of society Huck grew up in.

The values Huck absorbed from this society conflict with the progress of his relationship with Jim. When Jim realizes that Huck is playing a trick on him by claiming that their separation in a heavy fog was just a dream, Jim's feelings are hurt, and Huck feels bad and apologizes. This is when Huck becomes aware that Jim cares about him and that he cares about Jim too. Yet when they think they are close to Cairo, Huck's conscience bothers him, because he is helping Jim to freedom, which society has taught him is wrongdoing. Huck almost tells on Jim but decides to disregard society's version of morality. Their friendship grows stronger through a series of events, and eventually Huck decides that he would rather go to hell if it means following his gut rather than society's cruel principles. Huck's relationship with Jim changes from weak to strong and makes him change his views about life, particularly his sense of morality.

Huck respects Tom Sawyer and Jim, while he is critical of the duke and the dauphin. Huck remarks in chapter thirty-four that if he had Tom Sawyer's head, he would not trade it off for anything, and in most of his adventures he thinks about what Tom Sawyer would do. As for Jim, the more Huck finds out about him, such as how much he cares about his family and especially his children, the more he sees what a great person Jim is, and the greater his admiration becomes. On the other hand, the men who pretend to be the duke and the dauphin are the ones Huck dislikes and disapproves of. This is evident when Huck takes the $6,000 in gold that the duke and the dauphin scammed from Mary Jane and her sisters and tries to give it back. In chapter twenty-four, the duke and the dauphin make Huck "ashamed of the human race." Huck looks up to people who mean no harm to others.

The book contains a myriad of lessons and questions about different aspects of life. It also informs readers of what American society used to be like, one example being the belief that a person's nobility and goodness derive from the purity of their ancestry and determine who is to be looked up to and liked. The book undoubtedly has some thought-provoking subjects.

Tuesday, July 30, 2019

User Authentication Through Mouse Dynamics

IEEE TRANSACTIONS ON INFORMATION FORENSICS AND SECURITY, VOL. 8, NO. 1, JANUARY 2013

User Authentication Through Mouse Dynamics

Chao Shen, Student Member, IEEE, Zhongmin Cai, Member, IEEE, Xiaohong Guan, Fellow, IEEE, Youtian Du, Member, IEEE, and Roy A. Maxion, Fellow, IEEE

Abstract—Behavior-based user authentication with pointing devices, such as mice or touchpads, has been gaining attention. As an emerging behavioral biometric, mouse dynamics aims to address the authentication problem by verifying computer users on the basis of their mouse operating styles. This paper presents a simple and efficient user authentication approach based on a fixed mouse-operation task. For each sample of the mouse-operation task, both traditional holistic features and newly defined procedural features are extracted for accurate and fine-grained characterization of a user's unique mouse behavior. Distance-measurement and eigenspace-transformation techniques are applied to obtain feature components for efficiently representing the original mouse feature space. Then a one-class learning algorithm is employed in the distance-based feature eigenspace for the authentication task. The approach is evaluated on a dataset of 5550 mouse-operation samples from 37 subjects. Extensive experimental results are included to demonstrate the efficacy of the proposed approach, which achieves a false-acceptance rate of 8.74% and a false-rejection rate of 7.69%, with a corresponding authentication time of 11.8 seconds. Two additional experiments are provided to compare the current approach with other approaches in the literature. Our dataset is publicly available to facilitate future research.

Index Terms—Biometric, mouse dynamics, authentication, eigenspace transformation, one-class learning.

I. INTRODUCTION

THE quest for a reliable and convenient security mechanism to authenticate a computer user has existed since the inadequacy of the conventional password mechanism was realized, first by the security community, and then gradually by the public [31].

Manuscript received March 28, 2012; revised July 16, 2012; accepted September 06, 2012. Date of publication October 09, 2012; date of current version December 26, 2012. This work was supported in part by the NSFC (61175039, 61103240, 60921003, 60905018), in part by the National Science Fund for Distinguished Young Scholars (60825202), in part by the 863 High Tech Development Plan (2007AA01Z464), in part by the Research Fund for the Doctoral Program of Higher Education of China (20090201120032), and in part by Fundamental Research Funds for Central Universities (2012jdhz08). The work of R. A. Maxion was supported by the National Science Foundation under Grant CNS-0716677. Any opinions, findings, conclusions, or recommendations expressed in this material are those of the authors, and do not necessarily reflect the views of the National Science Foundation. The associate editor coordinating the review of this manuscript and approving it for publication was Dr. Sviatoslav Voloshynovskiy.

C. Shen, Z. Cai, X. Guan, and Y. Du are with the MOE Key Laboratory for Intelligent Networks and Network Security, Xi'an Jiaotong University, Xi'an, Shaanxi, 710049, China. R. A. Maxion is with the Dependable Systems Laboratory, Computer Science Department, Carnegie Mellon University, Pittsburgh, PA 15213 USA.
Color versions of one or more of the figures in this paper are available online at http://ieeexplore.ieee.org. Digital Object Identifier 10.1109/TIFS.2012.2223677

As data are moved from traditional localized computing environments to the new Cloud Computing paradigm (e.g., Box.net and Dropbox), the need for better authentication has become more pressing. Recently, several large-scale password leakages exposed users to an unprecedented risk of disclosure and abuse of their information [47], [48]. These incidents seriously shook public confidence in the security of the current information infrastructure; the inadequacy of password-based authentication mechanisms is becoming a major concern for the entire information society.

Of various potential solutions to this problem, a particularly promising technique is mouse dynamics. Mouse dynamics measures and assesses a user's mouse-behavior characteristics for use as a biometric. Compared with other biometrics such as face, fingerprint and voice [20], mouse dynamics is less intrusive, and requires no specialized hardware to capture biometric information. Hence it is suitable for the current Internet environment. When a user tries to log into a computer system, mouse dynamics only requires her to provide the login name and to perform a certain sequence of mouse operations. Extracted behavioral features, based on mouse movements and clicks, are compared to a legitimate user's profile. A match authenticates the user; otherwise her access is denied. Furthermore, a user's mouse-behavior characteristics can be continually analyzed during her subsequent usage of a computer system for identity monitoring or intrusion detection. Yampolskiy et al. provide a review of the field [45].

Mouse dynamics has attracted more and more research interest over the last decade [2]–[4], [8], [14]–[17], [19], [21], [22], [33], [34], [39]–[41], [45], [46]. Although previous research has shown promising results, mouse dynamics is still a newly emerging technique, and has not reached an acceptable level of performance (e.g., the European standard for commercial biometric technology, which requires a 0.001% false-acceptance rate and a 1% false-rejection rate [10]). Most existing approaches to mouse-dynamics-based user authentication result in low authentication accuracy or an unreasonably long authentication time. Either of these may limit applicability in real-world systems, because few users are willing to use an unreliable authentication mechanism, or to wait for several minutes to log into a system. Moreover, previous studies have favored using data from real-world environments over experimentally controlled environments, but this realism may cause unintended side-effects by introducing confounding factors (e.g., effects due to different mouse devices) that may affect experimental results. Such confounds can make it difficult to attribute experimental outcomes solely to user behavior, and not to other factors along the long path of mouse behavior, from hand to computing environment [21], [41].

It should also be noted that most mouse-dynamics research used data from both the impostors and the legitimate user to train the classification or detection model. However, in the scenario of mouse-dynamics-based user authentication, usually only the data from the legitimate user are readily available, since the user would choose her specific
sequence of mouse operations and would not share it with others. In addition, no datasets were published in previous research, which makes it difficult for third parties to verify previous work and precludes objective comparisons between different approaches.

A. Overview of Approach

Faced with the above challenges, our study aims to develop a mouse-dynamics-based user authentication approach that can perform user authentication in a short period of time while maintaining high accuracy. By using a controlled experimental environment, we have isolated inherent behavioral characteristics as the primary factors for mouse-behavior analysis. An overview of the proposed approach is shown in Fig. 1. It consists of three major modules: (1) mouse-behavior capture, (2) feature construction, and (3) training/classification. The first module serves to create a mouse-operation task, and to capture and interpret mouse-behavior data. The second module is used to extract holistic and procedural features to characterize mouse behavior, and to map the raw features into distance-based features by using various distance metrics. The third module, in the training phase, applies kernel PCA on the distance-based feature vectors to compute the predominant feature components, and then builds the user's profile using a one-class classifier. In the classification phase, it determines the user's identity using the trained classifier in the distance-based feature eigenspace. (A minimal code sketch of this pipeline is given after the contribution list below.)

Fig. 1. Overview of approach.

B. Purpose and Contributions of This Paper

This paper is a significant extension of an earlier and much shorter version [40]. The main purpose and major contributions of this paper are summarized as follows:

• We address the problem of unintended side-effects of inconsistent experimental conditions and environmental variables by restricting users' mouse operations to a tightly-controlled environment. This isolates inherent behavioral characteristics as the principal factors in mouse-behavior analysis, and substantially reduces the effects of external confounding factors.

• Instead of the descriptive statistics of mouse behavior usually adopted in existing work, we propose newly-defined procedural features, such as movement speed curves, to characterize a user's unique mouse-behavior characteristics in an accurate and fine-grained manner. These features could lead to a performance boost both in authentication accuracy and in authentication time.

• We apply distance metrics and kernel PCA to obtain a distance-based eigenspace for efficiently representing the original mouse feature space. These techniques partially handle behavioral variability, and make our proposed approach stable and robust to variability in behavior data.

• We employ one-class learning methods to perform the user authentication task, so that the detection model is built solely on the data from the legitimate user. One-class methods are more suitable for mouse-dynamics-based user authentication in real-world applications.

• We present a repeatable and objective evaluation procedure to investigate the effectiveness of our proposed approach through a series of experiments. As far as we know, no earlier work made informed comparisons between different features and results, due to the lack of a standard test protocol. Here we provide comparative experiments to further examine the validity of the proposed approach.

• A public mouse-behavior dataset is established (see Section III for availability), not only for this study but also to foster future research. This dataset contains high-quality mouse-behavior data from 37 subjects. To our knowledge, this study is the first to publish a shared mouse-behavior dataset in this field.
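To give a concrete picture of how the three modules fit together, the following is a minimal Python sketch, not the authors' code; the constructor arguments (feature_extractor, distance_fn, kpca, classifier) are hypothetical placeholders for the components described above, and later sketches in this post fill them in under the same assumptions.

```python
# Minimal skeleton of the three-module pipeline (capture -> features -> one-class learning).
# All helper objects passed to the constructor are assumed interfaces, not the paper's API.
import numpy as np

class MouseAuthenticator:
    def __init__(self, feature_extractor, distance_fn, kpca, classifier):
        self.feature_extractor = feature_extractor  # raw mouse events -> feature vector
        self.distance_fn = distance_fn              # feature vector -> distance-based vector
        self.kpca = kpca                            # eigenspace transformation (fit_transform/transform)
        self.classifier = classifier                # one-class learner (fit/predict)

    def train(self, raw_samples):
        feats = [self.feature_extractor(s) for s in raw_samples]
        dists = np.array([self.distance_fn(f) for f in feats])
        eig = self.kpca.fit_transform(dists)        # build the distance-based eigenspace
        self.classifier.fit(eig)                    # profile built only from the legitimate user

    def authenticate(self, raw_sample):
        f = self.feature_extractor(raw_sample)
        d = np.asarray(self.distance_fn(f)).reshape(1, -1)
        e = self.kpca.transform(d)
        return self.classifier.predict(e)[0] == 1   # +1 -> accept, -1 -> reject
```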
This study develops a mouse-dynamics-based user authentication approach that performs user authentication in a short time while maintaining high accuracy. It has several desirable properties: (1) it is easy to comprehend and implement; (2) it requires no specialized hardware or equipment to capture the biometric data; (3) it requires only about 12 seconds of mouse-behavior data to provide good, steady performance.

The remainder of this paper is organized as follows: Section II describes related work. Section III presents the data-collection process. Section IV describes the feature-construction process. Section V discusses the classification techniques for mouse dynamics. Section VI presents the evaluation methodology. Section VII presents and analyzes experimental results. Section VIII offers a discussion and possible extensions of the current work. Finally, Section IX concludes.

II. BACKGROUND AND RELATED WORK

In this section, we provide background on mouse-dynamics research, and various applications for mouse dynamics (e.g., authentication versus intrusion detection). Then we focus on applying mouse dynamics to user authentication.

A. Background of Mouse Dynamics

Mouse dynamics, a behavioral biometric for analyzing behavior data from pointing devices (e.g., mouse or touchpad), provides user authentication in an accessible and convenient manner [2]–[4], [8], [14]–[17], [19], [21], [22], [33], [34], [39]–[41], [45], [46]. Since Everitt and McOwan [14] first investigated in 2003 whether users could be distinguished by the use of a signature written by mouse, several different techniques and uses for mouse dynamics have been proposed.

Most researchers focus on the use of mouse dynamics for intrusion detection (sometimes called identity monitoring or reauthentication), which analyzes mouse-behavior characteristics throughout the course of interaction. Pusara and Brodley [33] proposed a reauthentication scheme using mouse dynamics for user verification. This study presented positive findings, but cautioned that their results were only preliminary. Gamboa and Fred [15], [16] were some of the earliest researchers to study identity monitoring based on mouse movements. Later on, Ahmed and Traore [3] proposed an approach combining keystroke dynamics with mouse dynamics for intrusion detection. They then considered mouse dynamics as a standalone biometric for intrusion detection [2]. Recently, Zheng et al. [46] proposed angle-based metrics of mouse movements for reauthentication systems, and explored the effects of environmental factors (e.g., different machines).

Yet only recently have researchers come to the use of mouse dynamics for user authentication (sometimes called static authentication), which analyzes mouse-behavior characteristics at particular moments. In 2007, Gamboa et al. [17] extended their approaches in identity monitoring [15], [16] into web-based authentication. Later on, Kaminsky et al. [22] presented an authentication scheme using mouse dynamics for identifying online game players.
Then, Bours and Fullu [8] proposed an authentication approach requiring users to make use of the mouse to trace a maze-like path. Most recently, a full survey of the existing work in mouse dynamics pointed out that mouse-dynamics research should focus on reducing authentication time and taking the effect of environmental variables into account [21].

B. User Authentication Based on Mouse Dynamics

The primary focus of previous research has been on the use of mouse dynamics for intrusion detection or identity monitoring. It is difficult to transfer previous work directly from intrusion detection to authentication, however, because a rather long authentication period is typically required to collect sufficient mouse-behavior data to enable reasonably accurate verification. To our knowledge, few papers have targeted the use of mouse dynamics for user authentication, which will be the central concern of this paper.

Hashia et al. [19] and Bours et al. [8] presented some preliminary results on mouse dynamics for user authentication. They both asked participants to perform fixed sequences of mouse operations, and they analyzed behavioral characteristics of mouse movements to authenticate a user during the login stage. Distance-based classifiers were established to compare the verification data with the enrollment data. Hashia et al. collected data from 15 participants using the same computer, while Bours et al. collected data from 28 subjects using different computers; they achieved equal-error rates of 15% and 28% respectively.

Gamboa et al. [17] presented a web-based user authentication system based on mouse dynamics. The system displayed an on-screen virtual keyboard, and required users to use the mouse to enter a paired username and pin-number. The extracted feature space was reduced to a best subspace through a greedy search process. A statistical model based on the Weibull distribution was built on training data from both legitimate and impostor users. Based on data collected from 50 subjects, the researchers reported an equal-error rate of 6.2%, without explicitly reporting authentication time. The test data were also used for feature selection, which may lead to an overly optimistic estimate of authentication performance [18].

Recently, Revett et al. [34] proposed a user authentication system requiring users to use the mouse to operate a graphical, combination-lock-like GUI interface. A small-scale evaluation involving 6 subjects yielded an average false-acceptance rate and false-rejection rate of around 3.5% and 4% respectively, using a distance-based classifier. However, experimental details such as the experimental apparatus and testing procedures were not explicitly reported. Aksari et al. [4] presented an authentication framework for verifying users based on a fixed sequence of mouse movements. Features were extracted from nine movements among seven squares displayed consecutively on the screen. They built a classifier based on scaled Euclidean distance using data from both legitimate users and impostors. The researchers reported an equal-error rate of 5.9% over 10 users' data collected from the same computer, but authentication time was not reported. It should be noted that the above two studies were performed on a small number of users, only 6 users in [34] and 10 users in [4], which may be insufficient to evaluate definitively the performance of these approaches.
The results of the above studies have been mixed, possibly due to the realism of the experiments, possibly due to a lack of real differences among users, or possibly due to experimental errors or faulty data. A careful reading of the literature suggests that (1) most approaches have resulted in low performance, or have used a small number of users, but since these studies do not tend to be replicated, it is hard to pin the discrepancies on any one thing; and (2) no research group provided a shared dataset. In our study, we control the experimental environment to increase the likelihood that our results will be free from experimental confounding factors, and we attempt to develop a simple and efficient user authentication approach based on mouse dynamics. We also make our data publicly available.

III. MOUSE DATA ACQUISITION

In this study, we collect mouse-behavior data in a controlled environment, so as to isolate behavioral characteristics as the principal factors in mouse-behavior analysis. We offer here considerable detail regarding the conduct of data collection, because these particulars can best reveal potential biases and threats to experimental validity [27]. Our dataset is publicly available from http://nskeylab.xjtu.edu.cn/projects/mousedynamics/behavior-data-set/.

A. Controlled Environment

In this study, we set up a desktop computer and developed a Windows application as a uniform hardware and software platform for the collection of mouse-behavior data. The desktop was an HP workstation with a Core 2 Duo 3.0 GHz processor and 2 GB of RAM. It was equipped with a 17-inch HP LCD monitor (set at 1280 x 1024 resolution) and a USB optical mouse, and ran the Windows XP operating system. Most importantly, all system parameters relating to the mouse, such as speed and sensitivity configurations, were fixed.

The Windows application, written in C#, prompted a user to conduct a mouse-operation task. During data collection, the application displayed the task in a full-screen window on the monitor, and recorded (1) the corresponding mouse operations (e.g., mouse-single-click), (2) the positions at which the operations occurred, and (3) the timestamps of the operations. The Windows-event clock was used to timestamp mouse operations [28]; it has a resolution of 15.625 milliseconds, corresponding to 64 updates per second.

When collecting data, each subject was invited to perform the mouse-operation task on the same desktop computer free of other subjects; data collection was performed one by one on the same data-collection platform. These conditions keep hardware and software factors consistent throughout the process of data collection over all subjects, thus removing unintended side-effects of unrelated hardware and software factors.

B. Mouse-Operation Task Design

To reduce behavioral variations due to different mouse-operation sequences, all subjects were required to perform the same sequence of mouse operations. We designed a mouse-operation task, consisting of a fixed sequence of mouse operations, and made these operations representative of a typical and diverse combination of mouse operations. The operations were selected according to (1) two elementary operations of mouse clicks: single click and double click; and (2) two basic properties of mouse movements: movement direction and movement distance [2], [39]. As shown in Fig. 2, movement directions are numbered from 1 to 8, and each of them is selected to represent one of eight 45-degree ranges over 360 degrees.
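As an illustration of how an individual movement could be mapped onto these eight sectors, and onto the distance ranges introduced in the next paragraph, here is a small sketch. The sector layout (sector 1 starting at 0 degrees) and the pixel thresholds for the distance bins are assumptions for illustration only; the paper's Fig. 2 and Table I define the actual ranges.

```python
import math

def movement_direction_sector(x0, y0, x1, y1):
    """Map a movement's overall angle to one of eight 45-degree sectors (1..8).
    Assumes sector 1 starts at 0 degrees; the paper's Fig. 2 fixes the real layout."""
    angle = math.degrees(math.atan2(y1 - y0, x1 - x0)) % 360.0
    return int(angle // 45.0) + 1

def movement_distance_bin(x0, y0, x1, y1, short=200, long=600):
    """Bin the movement length into short/middle/long ranges.
    The pixel thresholds here are made-up placeholders, not the paper's Table I values."""
    d = math.hypot(x1 - x0, y1 - y0)
    if d < short:
        return "short"
    return "middle" if d < long else "long"
```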
In addition, three distance intervals are considered, representing short-, middle- and long-distance mouse movements. Table I shows the directions and distances of the mouse movements used in this study. During data collection, every two adjacent movements were separated by either a single click or a double click. As a whole, the designed task consists of 16 mouse movements, 8 single clicks, and 8 double clicks. It should be noted that our task may not be unique. However, the task was carefully chosen to induce users to perform a wide variety of mouse movements and clicks that were both typical and diverse in an individual's repertoire of daily mouse behaviors.

Fig. 2. Mouse movement directions (eight 45-degree sectors).
TABLE I. MOUSE MOVEMENTS IN THE DESIGNED MOUSE-OPERATION TASK

C. Subjects

We recruited 37 subjects, many from within our lab, but some from the university at large. Our sample of subjects consisted of 30 males and 7 females. All of them were right-handed users, and had been using a mouse for a minimum of two years.

D. Data-Collection Process

All subjects were required to participate in two rounds of data collection per day, and waited at least 24 hours between collections (ensuring that some day-to-day variation existed within the data). In each round, each subject was invited, one by one, to perform the same mouse-operation task 10 times. A mouse-operation sample was obtained when a subject performed the task one time, in which she first clicked a start button on the screen, then moved the mouse to click subsequent buttons prompted by the data-collection application. Additionally, subjects were instructed to use only the external mouse device, and they were advised that no keyboard would be needed. Subjects were told that if they needed a break or needed to stretch their hands, they were to do so after they had accomplished a full round. This was intended to prevent artificially anomalous mouse operations in the middle of a task. Subjects were admonished to focus on the task, as if they were logging into their own accounts, and to avoid distractions, such as talking with the experimenter, while the task was in progress. Any error in the operating process (e.g., single-clicking a button when double-clicking was required) caused the current task to be reset, requiring the subject to redo it.

Subjects took between 15 days and 60 days to complete data collection. Each subject accomplished 150 error-free repetitions of the same mouse-operation task. The task took between 6.2 seconds and 21.3 seconds, with an average of 11.8 seconds over all subjects. The final dataset contained 5550 samples from 37 subjects.

IV. FEATURE CONSTRUCTION

In this section, we first extract a set of mouse-dynamics features, then use distance-measurement methods to obtain feature-distance vectors for reducing behavioral variability, and finally utilize an eigenspace transformation to extract principal feature components as classifier input.

A. Feature Extraction

The data collected in Section III are sequences of mouse operations, including left-single-clicks, left-double-clicks, and mouse movements. Mouse features were extracted from these operations, and were typically organized into a vector to represent the sequence of mouse operations in one execution of the mouse-operation task. Table II summarizes the derived features in this study.

TABLE II. MOUSE DYNAMICS FEATURES

We characterized mouse behavior based on two basic types of mouse operations: mouse click and mouse movement. Each mouse operation was then analyzed individually, and translated into several mouse features. Our study divided these features into two categories:

• Holistic features: features that characterize the overall properties of mouse behaviors during interactions, such as single-click and double-click statistics;

• Procedural features: features that depict the detailed dynamic processes of mouse behaviors, such as the movement speed and acceleration curves.

Most traditional features are holistic features, which suffice to obtain a statistical description of mouse behavior, such as the mean value of click times. They are easy to compute and comprehend, but they only characterize general attributes of mouse behavior. In our study, the procedural features characterize in-depth procedural details of mouse behavior. This information more accurately reflects the efficiency, agility and motion habits of individual mouse users, and thus may lead to a performance boost for authentication. Experimental results in Section VII demonstrate the effectiveness of these newly-defined features.

B. Distance Measurement

The raw mouse features cannot be used directly by a classifier, because of high dimensionality and behavioral variability. Therefore, distance-measurement methods were applied to obtain feature-distance vectors and to mitigate the effects of these issues. In the calculation of distance measurement, we first used the Dynamic Time Warping (DTW) distance [6] to compute the distance vector of procedural features. The reasons for this choice are that (1) the procedural features (e.g., movement speed curves) of two data samples are not likely to consist of exactly the same number of points, whether these samples are generated by the same or by different subjects; and (2) the DTW distance can be applied directly to measure the distance between the procedural features of two samples without deforming either or both of the two sequences in order to get an equal number of points. Next, we applied the Manhattan distance to calculate the distance vector of holistic features. The reasons for this choice are that (1) this distance is independent between dimensions, and can preserve the physical interpretation of the features, since its computation is the absolute value of cumulative difference; and (2) previous research in related fields (e.g., keystroke dynamics) reported that the use of Manhattan distance for statistical features could lead to better performance [23].
1) Reference Feature Vector Generation: We established the reference feature vector for each subject from her training feature vectors. Let {x_1, ..., x_n} be the training set of feature vectors for one subject, where x_i is a d-dimensional mouse feature vector extracted from the i-th training sample, and n is the number of training samples. Consider how the reference feature vector is generated for each subject:

Step 1: we computed the pairwise distance vectors of procedural features and holistic features between all pairs of training feature vectors x_i and x_j. We used the DTW distance to calculate the distance vector of procedural features, measuring the similarity between the procedural components of the two feature vectors, and we applied the Manhattan distance to calculate the distance vector of holistic features:

D_p(i, j) = DTW(x_i^p, x_j^p),  D_h(i, j) = |x_i^h - x_j^h|,   (1)

where x^p represents the procedural components of a feature vector x, and x^h represents its holistic components.

Step 2: we concatenated the distance vectors of holistic features and procedural features to obtain a single distance vector for the training feature vectors x_i and x_j:

D(i, j) = [D_h(i, j), D_p(i, j)].   (2)

Step 3: we normalized this vector to get a scale-invariant feature:

D'(i, j) = (D(i, j) - μ) / σ,   (3)

where μ is the mean of all pairwise distance vectors from the training set, and σ is the corresponding standard deviation. (The mean and sample covariance of such a training set are given by (5) and (6).)

Step 4: for each training feature vector, we calculated the arithmetic mean distance between this vector and the remaining training vectors, and selected as the reference feature vector the one with the minimum mean distance.   (4)

2) Feature-Distance Vector Calculation: Given the reference feature vector for each subject, we then computed the feature-distance vector between a new mouse feature vector and the reference vector. Let x_ref be the reference feature vector for one subject; then for any new feature vector x (either from the legitimate user or an impostor), we can compute the corresponding distance vector by (1), (2) and (3). In this paper, we used all mouse features in Table II to generate the feature-distance vector. There are 10 click-related features, 16 distance-related features, 16 time-related features, 16 speed-related features, and 16 acceleration-related features, which were taken together and then transformed into a 74-dimensional feature-distance vector that represents each mouse-operation sample.
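The distance measurement and steps 1 to 4 can be sketched as follows. The DTW is the textbook O(nm) recursion rather than the authors' implementation, each sample is assumed to be pre-split into a list of procedural curves plus one holistic vector, and reducing each normalized distance vector to a single scalar via its Euclidean norm in step 4 is an assumption, since the paper does not spell out that reduction.

```python
import numpy as np

def dtw_distance(a, b):
    """Dynamic Time Warping distance between two 1-D sequences of possibly
    different lengths (e.g., two movement-speed curves)."""
    n, m = len(a), len(b)
    cost = np.full((n + 1, m + 1), np.inf)
    cost[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            d = abs(a[i - 1] - b[j - 1])
            cost[i, j] = d + min(cost[i - 1, j], cost[i, j - 1], cost[i - 1, j - 1])
    return cost[n, m]

def pairwise_distance_vector(si, sj):
    """Steps 1-2: DTW over each procedural curve, Manhattan over the holistic part,
    concatenated into one distance vector. Samples are assumed to be dicts of the
    form {'procedural': [curve, ...], 'holistic': vector}."""
    proc = [dtw_distance(ci, cj) for ci, cj in zip(si['procedural'], sj['procedural'])]
    hol = np.abs(np.asarray(si['holistic'], dtype=float) - np.asarray(sj['holistic'], dtype=float))
    return np.concatenate([hol, np.asarray(proc)])

def choose_reference(train_samples):
    """Steps 3-4: normalize the pairwise distance vectors, then pick as reference the
    training sample with the minimum mean normalized distance to the others."""
    n = len(train_samples)
    pair = {(i, j): pairwise_distance_vector(train_samples[i], train_samples[j])
            for i in range(n) for j in range(n) if i != j}
    stacked = np.array(list(pair.values()))
    mu, sigma = stacked.mean(axis=0), stacked.std(axis=0) + 1e-12
    # Reduce each normalized vector to a scalar with its Euclidean norm (an assumption).
    mean_dist = [np.mean([np.linalg.norm((pair[(i, j)] - mu) / sigma) for j in range(n) if j != i])
                 for i in range(n)]
    return train_samples[int(np.argmin(mean_dist))], mu, sigma

def feature_distance_vector(sample, ref, mu, sigma):
    """Normalized feature-distance vector between a new sample and the reference."""
    return (pairwise_distance_vector(sample, ref) - mu) / sigma
```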
C. Eigenspace Computation: Training and Projection

It is usually undesirable to use all components in the feature vector as input for the classifier, because much of the data will not provide a significant degree of uniqueness or consistency. We therefore applied an eigenspace-transformation technique to extract the principal components as classifier input.

1) Kernel PCA Training: Kernel principal component analysis (KPCA) [37] is one approach to generalizing linear PCA to nonlinear cases using kernel methods. In this study, the purpose of KPCA is to obtain the principal components of the original feature-distance vectors. The calculation process is illustrated as follows. For each subject, the training set represents a set of feature-distance vectors drawn from her own data. Let x_i be the i-th feature-distance vector in the training set, and n be the number of such vectors. We first mapped the measured vectors into a high-dimensional feature space by a nonlinear mapping Φ, and centered each mapped point by subtracting the mean of the mapped training points. The principal components were then computed by solving the eigenvalue problem

λ v = C v,   (7)

where C = (1/n) Σ_j Φ(x_j) Φ(x_j)^T is the covariance of the centered mapped points and v lies in their span. Then, by defining a kernel matrix K with entries

K_ij = Φ(x_i) · Φ(x_j),   (8)

we obtain an eigenvalue problem for the coefficient vector α that is now solely dependent on the kernel function:

n λ α = K α.   (9)

For details, readers can refer to B. Schölkopf et al. [37]. Generally speaking, the first few eigenvectors correspond to large eigenvalues and carry most of the information in the training samples. Therefore, for the sake of providing the principal components to represent mouse behavior in a low-dimensional eigenspace, and for memory efficiency, we ignored small eigenvalues and their corresponding eigenvectors, keeping the smallest number r of components whose accumulated variance reaches a threshold θ:

(λ_1 + ... + λ_r) / (λ_1 + ... + λ_n) >= θ,   (10)

where the left-hand side is the accumulated variance of the r largest eigenvalues with respect to all eigenvalues. In this study, θ, which ranges from 0 to 1, was chosen as 0.95 for all subjects. Note that we used the same θ for different subjects, so r may be different from one subject to another. Specifically, in our experiments we observed that the number of principal components for different subjects varied from 12 to 20, and on average 17 principal components were identified under the threshold of 0.95.

2) Kernel PCA Projection: For the selected subject, taking the r largest eigenvalues and the associated eigenvectors, a transform matrix can be constructed to project an original feature-distance vector into a point in the r-dimensional eigenspace:

y = (v_1, ..., v_r)^T Φ(x).   (11)

As a result, each subject's mouse behavior can be mapped into a manifold trajectory in such a parametric eigenspace. It is well known that r is usually much smaller than the dimensionality of the original feature space. That is to say, eigenspace analysis can dramatically reduce the dimensionality of input samples. In this way, we used the extracted principal components of the feature-distance vectors as input for subsequent classifiers.
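A minimal KPCA sketch with the 0.95 accumulated-variance rule might look like the following. The paper does not state which kernel its KPCA uses, so the RBF kernel and its gamma value here are assumptions, and the class name SimpleKPCA is purely illustrative.

```python
import numpy as np

def rbf_kernel(X, Y, gamma=0.01):
    # gamma is a placeholder value, not taken from the paper
    d2 = ((X[:, None, :] - Y[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * d2)

class SimpleKPCA:
    """Kernel PCA keeping the smallest number of components whose accumulated
    eigenvalue mass reaches a fraction theta (0.95 in the paper)."""
    def __init__(self, gamma=0.01, theta=0.95):
        self.gamma, self.theta = gamma, theta

    def fit_transform(self, X):
        self.X = np.asarray(X, dtype=float)
        n = len(self.X)
        K = rbf_kernel(self.X, self.X, self.gamma)
        one = np.full((n, n), 1.0 / n)
        Kc = K - one @ K - K @ one + one @ K @ one          # center in feature space
        vals, vecs = np.linalg.eigh(Kc)                      # ascending eigenvalues
        vals, vecs = vals[::-1], vecs[:, ::-1]
        ratio = np.cumsum(vals) / vals.sum()
        r = int(np.searchsorted(ratio, self.theta) + 1)      # smallest r with ratio >= theta
        self.alphas = vecs[:, :r] / np.sqrt(np.maximum(vals[:r], 1e-12))
        self.K_fit = K
        return Kc @ self.alphas

    def transform(self, Y):
        K = rbf_kernel(np.asarray(Y, dtype=float), self.X, self.gamma)
        n = len(self.X)
        one_m = np.full((len(K), n), 1.0 / n)
        one = np.full((n, n), 1.0 / n)
        Kc = K - one_m @ self.K_fit - K @ one + one_m @ self.K_fit @ one
        return Kc @ self.alphas
```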
V. CLASSIFIER IMPLEMENTATION

This section explains the classifier that we used, and introduces two other widely-used classifiers. Each classifier analyzes mouse-behavior data, and discriminates between a legitimate user and impostors.

A. One-Class Classifier Overview

User authentication is still a challenging task from the pattern-classification perspective. It is a two-class (legitimate user versus impostors) problem. In the scenario of mouse-dynamics-based user authentication, a login user is required to provide the user name and to perform a specific mouse-operation task, which would be secret, like a password. Each user would choose her own mouse-operation task, and would not share that task with others. Thus, when building a model for a legitimate user, the only behavioral samples of her specific task are her own; other users' (considered as impostors in our scenario) samples of this task are not readily available. In this scenario, therefore, an appropriate solution is to build a model based only on the legitimate user's data samples, and use that model to detect impostors. This type of problem is known as one-class classification [43] or novelty/anomaly detection [25], [26]. We thus focused our attention on this type of problem, especially because in a real-world situation we would not have impostor renditions of a legitimate user's mouse operations anyway.

B. Our Classifier: One-Class Support Vector Machine

Traditional one-class classification methods are often unsatisfying, frequently missing some true positives and producing too many false positives. In this study, we used a one-class Support Vector Machine (SVM) classifier, introduced by Schölkopf et al. [36], [38]. One-class SVMs have been successfully applied to a number of real-life classification problems, e.g., face authentication, signature verification and keystroke authentication [1], [23].

In our context, given training samples x_1, ..., x_n belonging to one subject, each sample has r features (corresponding to the principal components of the feature-distance vector for that subject). The aim is to find a hyperplane that separates the data points by the largest margin. To separate the data points from the origin, one needs to solve the following dual quadratic programming problem [36], [38]:

min_α (1/2) Σ_{i,j} α_i α_j k(x_i, x_j), subject to 0 <= α_i <= 1/(νn), Σ_i α_i = 1,   (12)

where α is the vector of nonnegative Lagrangian multipliers to be determined, ν is a parameter that controls the trade-off between maximizing the number of data points contained by the hyperplane and the distance of the hyperplane from the origin, and k(·,·) is the kernel function. We allow for nonlinear decision boundaries. Then the decision function

f(x) = sgn(Σ_i α_i k(x_i, x) - ρ)   (13)

will be positive for the examples from the training set, where ρ is the offset of the decision function.

In essence, we viewed the user authentication problem as a one-class classification problem. In the training phase, the learning task was to build a classifier based on the legitimate subject's feature samples. In the testing phase, the test feature sample was projected into the same high-dimensional space, and the output of the decision function was recorded. We used a radial basis function (RBF) kernel in our evaluation, after comparative studies of linear, polynomial, and sigmoid kernels based on classification accuracy. The SVM parameter ν and kernel parameter γ (using LibSVM [11]) were set to 0.06 and 0.004 respectively. The decision function should generate +1 if the authorized user's test set is input; otherwise it is a false-rejection case. On the contrary, -1 should be obtained if the impostors' test set is the input; otherwise a false-acceptance case occurs.

C. Other Classifiers: Nearest Neighbor and Neural Network

In addition, we compared our classifier with two other widely-used classifiers, KNN and neural network [12]. For KNN, in the training phase, the nearest-neighbor classifier estimated the covariance matrix of the training feature samples, and saved each feature sample. In the testing phase, the nearest-neighbor classifier calculated the Mahalanobis distance from the new feature sample to each of the samples in the training data. The average distance, from the new sample to the k nearest feature samples from the training data, was used as the anomaly score. After multiple tests with k ranging from 1 to 5, we selected the best-performing value of k; results are detailed in Section VII.

For the neural network, in the training phase a network was built with one input node per feature, one output node, and a hidden layer. The network weights were randomly initialized between 0 and 1. The classifier was trained to produce a 1.0 on the output node for every training feature sample. We trained for 1000 epochs using a learning rate of 0.001. In the testing phase, the test sample was run through the network, and the output of the network was recorded. Denote y to be the output of the network; intuitively, if y is close to 1.0, the test sample is similar to the training samples, and with y close to 0.0, it is dissimilar.
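In scikit-learn terms, the one-class SVM described above (RBF kernel, nu = 0.06, gamma = 0.004) could be set up roughly as follows. The paper used LibSVM directly, so this is an approximate re-creation rather than the authors' code, and the helper names are illustrative.

```python
import numpy as np
from sklearn.svm import OneClassSVM

# Parameters quoted in the paper: nu = 0.06, RBF kernel with gamma = 0.004.
detector = OneClassSVM(kernel="rbf", nu=0.06, gamma=0.004)

def train_detector(legit_eigen_vectors):
    """Fit the one-class model on the legitimate user's eigenspace vectors only."""
    detector.fit(np.asarray(legit_eigen_vectors))

def is_accepted(test_eigen_vector):
    """predict() returns +1 for the legitimate user's region, -1 for outliers (impostors)."""
    return detector.predict(np.asarray(test_eigen_vector).reshape(1, -1))[0] == 1
```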
VI. EVALUATION METHODOLOGY

This section explains the evaluation methodology for mouse-behavior analysis. First, we summarize the dataset collected in Section III. Next, we set up the training and testing procedure for our one-class classifiers. Then, we show how classifier performance was calculated. Finally, we introduce a statistical testing method to further analyze experimental results.

A. Dataset

As discussed in Section III, samples of mouse-behavior data were collected when subjects performed the designed mouse-operation task in a tightly-controlled environment. All 37 subjects produced a total of 5550 mouse-operation samples. We then calculated feature-distance vectors, and extracted principal components from each vector as input for the classifiers.

B. Training and Testing Procedure

Consider a scenario as mentioned in Section V-A. We started by designating one of our 37 subjects as the legitimate user, and the rest as impostors. We trained the classifier and tested its ability to recognize the legitimate user and impostors as follows:

Step 1: We trained the classifier to build a profile of the legitimate user on a randomly-selected half of the samples (75 out of 150 samples) from that user.

Step 2: We tested the ability of the classifier to recognize the legitimate user by calculating anomaly scores for the remaining samples generated by the user. We designated the scores assigned to each sample as genuine scores.

Step 3: We tested the ability of the classifier to recognize impostors by calculating anomaly scores for all the samples generated by the impostors. We designated the scores assigned to each sample as impostor scores.

This process was then repeated, designating each of the other subjects as the legitimate user in turn. In the training phase, 10-fold cross-validation [24] was employed to choose the parameters of the classifiers. Since we used a random sampling method to divide the data into training and testing sets, and we wanted to account for the effect of this randomness, we repeated the above procedure 50 times, each time with independently selected samples drawn from the entire dataset.

C. Calculating Classifier Performance

To convert these sets of classification scores of the legitimate user and impostors into aggregate measures of classifier performance, we computed the false-acceptance rate (FAR) and false-rejection rate (FRR), and used them to generate an ROC curve [42]. In our evaluation, for each user, the FAR is calculated as the ratio between the number of false acceptances and the number of test samples of impostors; the FRR is calculated as the ratio between the number of false rejections and the number of test samples of legitimate users. Then we computed the average FAR and FRR over all subjects.

Whether or not a mouse-operation sample generates an alarm depends on the threshold for the anomaly scores. An anomaly score over the threshold indicates an impostor, while a score under the threshold indicates a legitimate user. In many cases, to make a user authentication scheme deployable in practice, minimizing the possibility of rejecting a true user (lower FRR) is sometimes more important than lowering the probability of accepting an impostor [46]. Thus we adjusted the threshold according to the FRR for the training data. Since calculation of the FRR requires only the legitimate user's data, no impostor data was used for determining the threshold. Specifically, the threshold is set to be a variable ranging over the score interval, and is chosen to give a relatively low FRR using 10-fold cross-validation on the training data. After multiple tests, we observe that setting the threshold to a value of 0.1 yields a low FRR on average (see the note below). Thus, we show results with a threshold value of 0.1 throughout this study.

Note that different classifiers have different threshold intervals; for instance, the threshold interval for the neural network detector is [0, 1], while the one-class SVM has a different interval. For uniform presentation, we mapped all of the intervals to a common range.
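The training/testing procedure and the FAR/FRR computation can be condensed into a short evaluation loop. The dict-of-arrays data layout and the helper names build_model and anomaly_score are assumptions standing in for the classifier of Section V; the 75-sample split, the 50 repetitions, and the 0.1 threshold follow the text above.

```python
import numpy as np

def evaluate(subjects, build_model, anomaly_score, threshold=0.1, repetitions=50, rng=None):
    """subjects: dict {subject_id: array of per-sample feature vectors (150 per subject)}.
    build_model(train) returns a fitted one-class model; anomaly_score(model, x) returns a
    normalized score where values above `threshold` are flagged as impostor behavior."""
    rng = rng or np.random.default_rng(0)
    fars, frrs = [], []
    for _ in range(repetitions):
        for legit, samples in subjects.items():
            idx = rng.permutation(len(samples))
            train, test = samples[idx[:75]], samples[idx[75:]]
            model = build_model(train)
            genuine = np.array([anomaly_score(model, x) for x in test])
            impostor = np.array([anomaly_score(model, x)
                                 for other, s in subjects.items() if other != legit for x in s])
            frrs.append(np.mean(genuine > threshold))    # legitimate samples wrongly rejected
            fars.append(np.mean(impostor <= threshold))  # impostor samples wrongly accepted
    return float(np.mean(fars)), float(np.mean(frrs))
```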
D. Statistical Analysis of the Results

To evaluate the performance of our approach, we developed a statistical test using the half total error rate (HTER) and confidence-interval (CI) evaluation [5]. The HTER test aims to statistically evaluate the performance for user authentication, and is defined by combining the false-acceptance rate (FAR) and false-rejection rate (FRR):

HTER = (FAR + FRR) / 2.   (14)

Confidence intervals are computed around the HTER as HTER ± z·σ, where σ and z are computed by [5]:

σ = sqrt( FAR·(1 - FAR)/(4·NI) + FRR·(1 - FRR)/(4·NG) ),   (15)

z = 1.645 for a 90% confidence level, 1.960 for 95%, and 2.576 for 99%,   (16)

where NG is the total number of genuine scores, and NI is the total number of impostor scores.
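Equations (14) to (16) translate into a few lines of Python. This is a sketch following the HTER confidence-interval formulation of reference [5]; the dictionary of z-values encodes the standard normal quantiles quoted above.

```python
import math

Z = {90: 1.645, 95: 1.960, 99: 2.576}  # standard normal quantiles per confidence level

def hter_confidence_interval(far, frr, n_impostor, n_genuine, level=95):
    """HTER = (FAR + FRR) / 2 with interval HTER +/- z * sigma, where
    sigma^2 = FAR(1-FAR)/(4*NI) + FRR(1-FRR)/(4*NG)."""
    hter = (far + frr) / 2.0
    sigma = math.sqrt(far * (1 - far) / (4 * n_impostor) + frr * (1 - frr) / (4 * n_genuine))
    margin = Z[level] * sigma
    return hter, (hter - margin, hter + margin)
```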
VII. EXPERIMENTAL RESULTS AND ANALYSIS

Extensive experiments were carried out to verify the effectiveness of our approach. First, we performed the authentication task using our approach, and compared it with two widely-used classifiers. Second, we examined our primary results concerning the effect of eigenspace-transformation methods on classifier performance. Third, we explored the effect of sample length on classifier performance, to investigate the trade-off between security and usability. Two additional experiments are provided to compare our method with other approaches in the literature.

A. Experiment 1: User Authentication

In this section, we conducted a user authentication experiment, and compared our classifier with the two widely-used ones mentioned in Section V-C. The data used in this experiment consisted of 5550 samples from 37 subjects. Fig. 3 and Table III show the ROC curves and average FARs and FRRs of the authentication experiment for each of the three classifiers, with standard deviations in parentheses. Table III also includes the average authentication time, which is the sum of the average time needed to collect the data and the average time needed to make the authentication decision (note that since the latter of these two times is always less than 0.003 seconds in our classifiers, we ignore it in this study).

Fig. 3. ROC curves for the three different classifiers used in this study: one-class SVM, neural network, and nearest neighbor.
TABLE III. FARs AND FRRs OF USER AUTHENTICATION EXPERIMENT (WITH STANDARD DEVIATIONS IN PARENTHESES)
TABLE IV. HTER PERFORMANCE AND CONFIDENCE INTERVAL AT DIFFERENT CONFIDENCE LEVELS

Our first observation is that the best performance has a FAR of 8.74% and a FRR of 7.96%, obtained by our approach (one-class SVM). This result is promising and competitive, and the behavioral samples are captured over a much shorter period of time compared with previous work. It should be noted that our result does not yet meet the European standard for commercial biometric technology, which requires near-perfect accuracy of 0.001% FAR and 1% FRR [10]. But it does demonstrate that mouse dynamics could provide valuable information in user authentication tasks. Moreover, with a series of incremental improvements and investigations (e.g., outlier handling), it seems possible that mouse dynamics could be used as, at least, an auxiliary authentication technique, such as an enhancement for conventional password mechanisms.

Our second observation is that our approach has substantially better performance than all other classifiers considered in our study. This may be due to the fact that SVMs can convert the problem of classification into quadratic optimization in the case of relative insufficiency of prior knowledge, and still maintain high accuracy and stability. In addition, the standard deviations of the FAR and FRR for our approach are much smaller than those for the other classifiers, indicating that our approach may be more robust to variable behavior data and different parameter-selection procedures.

Our third observation is that the average authentication time in our study is 11.8 seconds, which is impressive and achieves an acceptable level of performance for a practical application. Some previous approaches may lead to low availability due to a relatively long authentication time. However, an authentication time of 11.8 seconds in our study shows that we can perform mouse-dynamics analysis quickly enough to make it applicable to authentication for most login processes. We conjecture that the significant decrease of authentication time is due to the procedural features providing more detailed and fine-grained information about mouse behavior, which could enhance performance.

Finally, we conducted a statistical test, using the HTER and CI evaluation as mentioned in Section VI-D, to statistically evaluate the performance of our approach. Table IV summarizes the results of this statistical evaluation at different confidence levels. The result shows that the proposed approach provides the lowest HTER in comparison with the other two classifiers used in our study (see Table IV for the 95% confidence interval).

B. Experiment 2: Effect of Eigenspace Transformation

This experiment examined the effect of eigenspace-transformation methods on classifier performance. The data used were the same as in Experiment 1. We applied a one-class SVM classifier in three evaluations, with the inputs respectively set to be the original feature-distance vectors (without any transformations), the projection of feature-distance vectors by PCA, and the projection of feature-distance vectors by KPCA. Fig. 4 and Table V show the ROC curves and average FARs and FRRs for each of the three feature spaces, with standard deviations in parentheses.

Fig. 4. ROC curves for three different feature spaces: the original feature space, the projected feature space by PCA, and the projected feature space by KPCA.
TABLE V. FARs AND FRRs FOR THREE DIFFERENT FEATURE SPACES (WITH STANDARD DEVIATIONS IN PARENTHESES)

As shown in Fig. 4 and Table V, the authentication accuracy for the feature space transformed by KPCA is the best, followed by the accuracies for the feature space transformed by PCA and then the original one. Specifically, direct classification in the original feature space (without transformations) produces a FAR of 15.45% and a FRR of 15.98%. This result is not encouraging compared to results previously reported in the literature. However, as mentioned in Experiment 1, the samples may be subject to more behavioral variability compared with previous work, because previous work analyzed mouse behaviors over a longer period of observation. Moreover, we observe that the authentication results obtained with PCA and with KPCA (Table V) are much better than for direct classification. This result is a demonstration of the effectiveness of the eigenspace transformation in dealing with variable behavior data. Furthermore, we find that the performance of KPCA is slightly superior to that of PCA. This may be due to the nonlinear variability (or noise) existing in mouse behaviors, which KPCA can reduce by using kernel transformations [29]. It is also of note that the standard deviations of FAR and FRR based on the feature spaces transformed by KPCA and PCA are smaller than those of the original feature space (without transformations), indicating that the eigenspace-transformation technique enhances the stability and robustness of our approach.
C. Experiment 3: Effect of Sample Length

This experiment explored the effect of sample length on classifier performance, to investigate the trade-off between security (authentication accuracy) and usability (authentication time). In this study, the sample length corresponds to the number of mouse operations needed to form one data sample. Each original sample consists of 32 mouse operations.
To explore the effect of sample length on the performance of our approach, we derived new datasets with different sample lengths by applying bootstrap sampling techniques [13] to the original dataset, making the derived datasets contain the same numbers of samples as the original dataset. The new data samples were generated in the form of multiple consecutive mouse samples from the original dataset. In this way, we considered classifier performance as a function of the sample length, using all bootstrap samples derived from the original dataset. We conducted the authentication experiment again (using the one-class SVM) on six derived datasets, with sample lengths of up to 800 operations.

TABLE VI. FARs AND FRRs OF DIFFERENT SAMPLE LENGTHS

Table VI shows the FARs and FRRs at varying sample lengths, using the one-class SVM classifier. The table also includes the authentication time in seconds. The FAR and FRR obtained using a sample length of 32 mouse operations are 8.74% and 7.96% respectively, with an authentication time of 11.8 seconds. As the number of operations increases, the FAR and FRR drop to 6.7% and 6.68% for a data sample comprising 80 mouse operations, corresponding to an authentication time of 29.88 seconds. Therefore, we may conclude that classifier performance almost certainly gets better as the sample length increases. Note that 60 seconds may be an upper bound for authentication time, but the corresponding FAR of 4.69% and FRR of 4.46% are still not low enough to meet the needs of the European standard for commercial biometric technology [10]. We find that after observing 800 mouse operations, our approach can obtain a FAR of 0.87% and a FRR of 0.69%, which is very close to the European standard, but with a corresponding authentication time of about 10 minutes. This long authentication time may limit applicability in real systems. Thus, a trade-off must be made between security and user acceptability, and more investigations and improvements should be performed to secure a place for mouse dynamics in more pragmatic settings.
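One way to read the bootstrap construction above is that longer samples are formed by concatenating randomly drawn consecutive original samples until the desired number of operations is reached. The sketch below is an interpretation of that description, not the authors' exact resampling code; the function and constant names are illustrative.

```python
import numpy as np

OPS_PER_ORIGINAL_SAMPLE = 32  # each original sample holds 32 mouse operations

def derive_dataset(samples, target_ops, n_out=None, rng=None):
    """Bootstrap a derived dataset whose samples contain roughly `target_ops` operations,
    by concatenating consecutive original samples drawn with replacement.
    `samples` is assumed to be a list of arrays of operation records."""
    rng = rng or np.random.default_rng(0)
    n_out = n_out or len(samples)            # keep the same number of samples as the original set
    per_new = max(1, round(target_ops / OPS_PER_ORIGINAL_SAMPLE))
    derived = []
    for _ in range(n_out):
        start = rng.integers(0, len(samples))
        picked = [samples[(start + k) % len(samples)] for k in range(per_new)]
        derived.append(np.concatenate(picked))  # one longer sample of ~target_ops operations
    return derived
```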
In combination with the result when using all features, it appears that procedural features may be more stable and discriminative than holistic features, which suggests that the procedural features contribute more to the authentication accuracy. The results here are only preliminary comparisons and should not be used to conclude that a certain set of mouse features is always better than others. Each feature set has its own unique advantages and disadvantages under different conditions and applications, so further evaluations and comparisons on more realistic and challenging datasets are needed.

2) Comparison 2: Comparison With Previous Work: Most previous approaches have either resulted in poor performance (in terms of authentication accuracy or time), or have used data of limited size. In this section, we show a qualitative comparison of our experimental results and settings against the results of previous work (listed in Table VIII). Revett et al. [34] and Aksari and Artuner [4] considered mouse dynamics as a standalone biometric, and obtained an authentication accuracy of EER around 4% and 5.9% respectively, with a relatively short authentication time or small number of mouse operations. But their results were based on a small pool of users (6 users in [34] and 10 users in [4]), which may be insufficient to obtain a good, steady result. Our study relies on an improved user authentication methodology and far more users, leading us to achieve good and robust authentication performance. Ahmed and Traore [2] achieved a high authentication accuracy, but as we mentioned before, it might be difficult to use such a method for user authentication since the authentication time or the number of mouse operations needed to verify a user's identity is too high to be practical for real systems. Additionally, Hashia et al. [19] and Bours and Fullu [8] could perform user authentication in a relatively short time, but they reported unacceptably high error rates (EER of 15% in [19], and EER of 26.8% in [8]). In our approach we can make an authentication decision within a reasonably short authentication time while maintaining high accuracy. We employ a one-class classifier, which is more appropriate for mouse-dynamics-based user authentication. As mentioned in Experiment 3, we can make an authentication decision in less than 60 seconds, with corresponding error rates of FAR 4.69% and FRR 4.46%. Although this result could be improved, we believe that, at our current performance level, mouse dynamics suffices to be a practical auxiliary authentication mechanism. In summary, Comparison 1 shows that our proposed features outperform some traditional features used in previous studies, and may be more stable and robust to variable behavior data. Comparison 2 indicates that our approach is competitive with existing approaches in authentication time while maintaining high accuracy. More detailed statistical studies on larger and more realistic datasets are desirable for further evaluation.

VIII. DISCUSSION AND EXTENSION FOR FUTURE WORK
Based on the findings from this study, we take away some messages, each of which may suggest a trajectory for future work. Additionally, our work highlights the need for shared data and resources.

A. Success Factors of Our Approach
The presented approach achieved a short authentication time and relatively high accuracy for mouse-dynamics-based user authentication.
TABLE VIII: Comparison with previous work. Authentication time was not explicitly reported in [4], [8], [17]; instead, they required the user to accomplish a number of mouse operations for each authentication (15 clicks and 15 movements for [17]; 10 clicks and 9 movements for [4]; 18 short movements without pauses for [8]). Authentication time was not explicitly stated in [2]; however, it can be estimated from the data-collection process. For example, it is stated in [2] that an average of 12 hours 55 minutes of data were captured from each subject, representing an average of 45 sessions. We therefore assume that the average session length is 12.92 × 60/45 ≈ 17.22 minutes ≈ 1033 seconds.

However, it is quite hard to point out one or two things that may have made our results better than those of previous work, because (1) past work favored realism over experimental control, (2) evaluation methodologies were inconsistent among previous work, and (3) there have been no public datasets on which to perform comparative evaluations. Experimental control, however, is likely to be responsible for much of our success. Most previous work does not reveal any particulars about how experiments were controlled, while our work is tightly controlled. We made every effort to control experimental confounding factors to prevent them from having unintended influence on the subjects' recorded mouse behavior. For example, the same desktop computer was used for data collection for all subjects, and all system parameters relating to the mouse were fixed. In addition, every subject was provided with the same instructions. These settings suggest strongly that the differences among subjects were due to individually detectable mouse-behavior differences, and not to environmental variables or experimental conditions. We strongly advocate the control of potential confounding factors in future experiments. The reason is that controlled experiments are necessary to reveal causal connections among experimental factors and classifier performance, while realistic but uncontrolled experiments may introduce confounding factors that could influence experimental outcomes, which would make it hard to tell whether the results of those evaluations actually reflect detectable differences in mouse behavior among test subjects, or differences among computing environments. We had more subjects (37), more repetitions of the operation task (150), and more comprehensive mouse operations (2 types of mouse clicks, 8 movement directions, and 3 movement distance ranges) than most studies did. Larger subject pools, however, sometimes make things harder; when there are more subjects there is a higher possibility that two subjects will have similar mouse behaviors, resulting in more classification errors. We proposed the use of procedural features, such as the movement speed curve and acceleration curve, to provide more fine-grained information about mouse behavior than some traditional features. This may allow one to accurately describe a user's unique mouse behavior, thus leading to a performance improvement for mouse-dynamics-based user authentication. We adopted methods for distance measurement and eigenspace transformation for obtaining principal feature components to efficiently represent the original mouse feature space. These methods not only overcome within-class variability of mouse behavior, but also preserve between-class differences of mouse behavior.
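A minimal sketch of this eigenspace idea, projecting standardized feature vectors onto their principal components and then training a one-class SVM in the reduced space, might look as follows. The feature dimensions, component count and SVM parameters are illustrative assumptions rather than the values used in the paper; the standardization step is only a crude stand-in for the distance-measurement step described above, and KernelPCA could be substituted for PCA to obtain the nonlinear (KPCA) variant.

```python
import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.decomposition import PCA
from sklearn.svm import OneClassSVM

rng = np.random.default_rng(1)
X_train = rng.normal(size=(150, 60))   # hypothetical mouse-feature vectors of one user
X_test = rng.normal(size=(30, 60))     # hypothetical test vectors

model = make_pipeline(
    StandardScaler(),                  # put all features on a common scale (stand-in step)
    PCA(n_components=20),              # eigenspace transformation to principal components
    OneClassSVM(kernel="rbf", gamma="scale", nu=0.1),
)
model.fit(X_train)

# decision_function > 0: accept the sample as coming from the legitimate user
decisions = model.decision_function(X_test)
print((decisions > 0).mean())
```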
The improved authentication accuracies demonstrate the efficacy of these methods. Finally, we used a one-class learning algorithm to perform the authentication task, which is more appropriate for mouse-dynamics-based user authentication in real applications. In general, until there is a comparative study that stabilizes these factors, it will be hard to be definitive about the precise elements that made this work successful.

B. Opportunities for Improvement
While previous studies showed promising results in mouse dynamics, none of them have been able to meet the requirement of the European standard for commercial biometric technology. In this work, we determined that mouse dynamics may achieve a pragmatically useful level of accuracy, but with an impractically long authentication time.
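To make the accuracy side of this trade-off concrete, the equal error rate (EER) quoted in such comparisons can be read off the point where FAR and FRR coincide as the acceptance threshold is swept. A small sketch of that calculation on hypothetical score arrays (not the paper's data) is shown below.

```python
import numpy as np

def equal_error_rate(genuine_scores, impostor_scores):
    """Sweep the acceptance threshold and return the error rate at the point
    where false-acceptance and false-rejection rates are (nearly) equal."""
    thresholds = np.sort(np.concatenate([genuine_scores, impostor_scores]))
    best_gap, eer = 1.0, 0.0
    for t in thresholds:
        far = np.mean(impostor_scores >= t)   # impostors accepted
        frr = np.mean(genuine_scores < t)     # genuine samples rejected
        if abs(far - frr) < best_gap:
            best_gap, eer = abs(far - frr), (far + frr) / 2.0
    return eer

rng = np.random.default_rng(2)
genuine = rng.normal(1.0, 0.5, size=500)     # hypothetical scores for the legitimate user
impostor = rng.normal(-1.0, 0.5, size=500)   # hypothetical impostor scores
print(f"EER ~ {equal_error_rate(genuine, impostor):.2%}")
```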

Psychological Association Essay

The code, first published in 1953, is applicable to psychologists of all categories, though various principles are mostly relevant to clinical psychologists in their activities of research, teaching, assessment and therapy. The objective of these codes is to instill ethical behavior among psychologists. The code is categorized into two groups, one of which is the ethical standards. These encompass rules that are enforceable and specific, covering a great deal of the activities performed by psychologists. Ethical standards are further categorized into 10 groups with a sum total of 89 standards.

Impact of the APA code of ethics on psychology

The field has mostly committed people who have a far greater motivation for doing their work than material wellbeing. This stems from observing the virtue that proclaims that psychologists should not harm clients but strive to benefit them. Keenness and high levels of professionalism are more pronounced in the field due to the fact that accuracy and truthfulness is one of the guiding principles for psychologists. The principle stressing the forging of close friendships between psychologists and their clients has the likely effect of speeding up the recovery of clients. This is because one major reason why clients see psychologists is problems associated with neglect and loneliness (Lane & Meisels, 1994, p. 34). The public has more trust in psychologists because they are assured of the fact that their confidential information is safely guarded. The chances of a client opening up to a psychologist are therefore high. This in turn makes diagnosis and therapy more effective due to the availability of accurate information. The fact that psychologists happen to be calm and composed people makes the atmosphere around an examination room relaxing. This in turn makes the client, who might be inclined to overexcitement, also composed. Therapy and examination are thus greatly simplified. The existence of a universally accepted code for the discipline makes it easier to compare notes among scholars from different backgrounds. This in turn makes the synchronization of activities easier and hence the connecting of scholars from different parts of the globe. Sharing of ideas is thus enhanced, with the ultimate result of improving the quality of content in the discipline (Lane & Meisels, 1994, p. 56).

References
McWhirter, Darien (1995). Equal Protection. New York: Oryx Press, pp. 23, 78.
Lane, Robert & Meisels, Murray (1994). A History of the Division of Psychoanalysis of the American Psychological Association. New York: Lawrence Erlbaum Associates, pp. 34, 56.

Monday, July 29, 2019

Investigate and analyse the financial system of South Korea, Its level Essay

Investigate and analyse the financial system of South Korea, its level of development, the efficiency of its financial markets, an - Essay Example South Korea established a central bank in 1950 that was given the mandate of regulating all the other banks in the country, printing and circulating the currency of South Korea, as well as making laws and regulations that would govern other financial institutions in the country. The minor banks in South Korea had the function of extending credit services to businesses and other medium- and long-term investment projects (pg 48). Today, the financial system of South Korea has grown and continues to improve remarkably over the years. South Korea is located in northeastern Asia and is bordered by the Yellow Sea to the west and the Democratic People's Republic of Korea to the north. South Korea has four distinct seasons and in 2011 the population was estimated to be 48.75 million people, with the annual population growth rate estimated at 0.23%. South Korea is characterized by a low birth rate and high life expectancy, at an average of 82 years for women and 75 years for men, and literacy levels are high, with compulsory schooling for the first 9 years. This has greatly been affecting the economy of South Korea because most of the population is made up of older people. The major religions in South Korea are Christianity, Buddhism, Shamanism, Confucianism and Chondogyo. Politically, South Korea has a well-organized government led by the president, the parliament and the judiciary. Power was well laid out in the constitution that was amended in 1987 (Kim & Black, 2004). South Korea has had a well-performing investment sector, especially in agriculture and other medium- and long-term investments. This sector has been an integral part of the economy of South Korea, and the banks even offered loans that would be channeled towards these businesses. They contributed to the growth of South Korea's GDP, which has improved even though it staggered for some time due to hard economic times that South Korea went through (Lau, 1996). The depository sector of the financial system has also been improved over the years: unlike the times when banks were solely owned by the government, people have been allowed the freedom to hold shares in the banking sector, and the banks have started offering depository services for their customers (Lee, 2004). This has strengthened the financial system of South Korea and has ensured that there is constant growth in the sector. In recent years, South Korea's financial system moved from government ownership to more widespread ownership, where people were allowed to participate directly through the purchase of shares. This was contributed to by the increased reforms and strategies that were geared towards the attainment of stability in the financial markets. Over the past 10 years, the GDP of South Korea has experienced fluctuations, with 9% growth in 2009 and 6.1% being recorded in 2010. This was due to changes in economic situations globally and changes in the level of exports in the country over the years. South Korea's financial system has improved significantly, and the country has even gone ahead to sign business agreements with North Korea that are aimed at improving exports, hence ensuring the country growth in GDP and Foreign Direct Investment (FDI) (Zahid, 1995).
The growth of the financial institutions in South Korea has been greatly affected by the aging population, strict labor laws, poor management of the institutions, and underdevelopment of the

Sunday, July 28, 2019

Measuring the Effectiveness of the Forum for Youth Investment Program Article

Measuring the Effectiveness of the Forum for Youth Investment Program - Article Example However, the perception of the potential outcome of such initiatives, as well as the prospect of having the projects realize the aforesaid interest, needs to be validated. This interest remains best defined via the presentation of a possible avenue upon which the program may be evaluated. A possible path towards the realization of this interest anticipates the consideration of several critical principles. Such statutory guidelines offer an insight into the potential of the program in achieving the considered intent. Programs such as the Forum for Youth Investment need to be vetted in order to be allowed to gauge the effectiveness of the initiatives on the ground. They need to express an ultimate potential or capacity for undertaking their principal agenda exponentially. This evaluation seeks to detail this concern with the hope of presenting a reliable image of the potential of the Forum for Youth Investment. The evaluation hopes to be able to propose methods and measures that may allow for the reflection of the abilities accorded to the program. This is deemed to be of essential merit to the initial developers that sought to use the program for the evaluation of their ideas (Yohalem & Wilson-Ahlstrom, 2009, 16). Additionally, the findings will be of benefit to the youths since they will offer useful information on the available vetting programs. Having the Forum for Youth Investment unevaluated allows for the reduction of its potential. The design will nest its focus on the principal structure of the program. This will espouse a top-down approach. The evaluation will consider the output of the program. This will form the base upon which to evaluate the adopted procedure and protocols. The evaluation will simply seek to identify the possibility of achieving the noted result from the adopted items of the check. The program has four levels of outcome.

Saturday, July 27, 2019

History of the Japanese in North America Essay Example | Topics and Well Written Essays - 1000 words

History of the Japanese in North America - Essay Example People from Japan began migrating to the U.S. in significant numbers following the political, cultural, and social changes stemming from the 1868 Meiji Restoration. Particularly after the Chinese Exclusion Act of 1882, Japanese immigrants were sought by industrialists to replace the Chinese immigrants. In 1907, the "Gentlemen's Agreement" between the governments of Japan and the U.S. ended immigration of Japanese workers (i.e., men), but permitted the immigration of spouses of Japanese immigrants already in the U.S. The Immigration Act of 1924 banned the immigration of all but a token few Japanese. The ban on immigration produced unusually well-defined generational groups within the Japanese American community. Initially, there was an immigrant generation, the Issei, and their U.S.-born children, the Nisei. The Issei were exclusively those who had immigrated before 1924. Because no new immigrants were permitted, all Japanese Americans born after 1924 were--by definition--born in the U.S. This generation, the Nisei, became a distinct cohort from the Issei generation in terms of age, citizenship, and language ability, in addition to the usual generational differences. Institutional and interpersonal racism led many of the Nisei to marry other Nisei, resulting in a third distinct generation of Japanese Americans, the Sansei. Significant Japanese immigration did not occur until the Immigration Act of 1965 ended 40 years of bans against immigration from Japan and other countries. The Naturalization Act of 1790 restricted naturalized U.S. citizenship to "free white persons," which excluded the Issei from citizenship. As a result, the

Friday, July 26, 2019

Live Performance Review Essay Example | Topics and Well Written Essays - 750 words

Live Performance Review - Essay Example Currently, The Blind Tiger stands out as the best live music venue in the state of North Carolina. It is committed to bringing out the best of regional, local and national music. Almost all the local talents that reside in Greensboro attribute the exposure and discovery of their talent to the Tiger (Coston 56). With twenty-five years of supporting live music, the club anticipates remaining indisputable in offering the best entertainment in the region. The Summer Breeze Concert was conducted by the Jazz Revolution band. The band consisted of Letron Brantley on saxophone and flute, Mark Catoe on acoustic piano, Wilbur Thompson on acoustic and electric bass and upright, Kristin Randals as lead vocalist, Adam Snow on the drums and Mayhue Bostic on the guitar. The concert was set in a small and intimate space to provide the best setting for hearing jazz. Half of the stage is taken up by a baby grand piano. The band performed a mixture of modal and hard bop jazz. Just like any other genre of music, jazz music entails the telling of a story (Ake et al 2010). The Jazz Revolution band collectively performed ten of their pieces; however, only four of them were different in terms of style, allowing the audience to get diversified sounds of jazz music. The band performed "Fly Me to the Moon", an upbeat standard with a consonant, Latin-inspired opening that initially set out the romantic mood. A saxophone-piano alternation served as an alteration of the song's melody, creating an impression of two lovers flying to the moon. The rhythm of the song was initially steady but quickened as the song approached its climax. With an increased passion for the song, the texture had a new twist as the saxophone carried on the melody while the piano and other instruments accompanied. At the climax, the dynamics of the song seemed to get

Thursday, July 25, 2019

Reaction paper 2 Essay Example | Topics and Well Written Essays - 500 words - 1

Reaction paper 2 - Essay Example Ray Eddy resorts to conducting illegal business after her husband leaves with all the family savings. Eddy and her two children can no longer survive on the meagre wages she gets from her store. She meets Lila, who has lately been in the business of smuggling immigrants. The two movies have remarkable contrasts. The essay illustrates key differences evident in the two films. The first difference between the two films relates to their production. The Karate Kid is a Hollywood film. The production was courtesy of Sony Pictures, which makes it a major studio film (Horn 1). Frozen River, on the contrary, was a production of the Cohen Media Group. Other companies credited for the film's production are Harwood Hunt Productions and Off Hollywood Pictures. Its runtime is 98 minutes. That is contrary to The Karate Kid, which has a runtime of 2 hours 20 minutes. There is a remarkable difference in the manner in which viewers are represented in the two films. That explains the differences in the way eurocentrism operates in the two films. Eurocentrism relates to perceptions of the exceptionalism of Europe that developed into a worldview with western civilization. Hollywood, over a long time, has promoted the concept of Eurocentrism in most movies and films. In essence, such films depict the perceived supremacy that Europe holds. Eurocentrism is evident in The Karate Kid, depicted through its characters. It implies the common notion of superiority, evident in Dre's drive to learn and compete with peers who were initially superior. That emphasizes eurocentrism, given that the two characters, Dre and Wang, are from different regions. That differs from the depiction of characters in Frozen River. The film does not reveal Eurocentrism or aspects of superiority. There is a difference in the manner in which non-White main characters in the two films

Wednesday, July 24, 2019

Assignment Example | Topics and Well Written Essays - 250 words - 60

Assignment Example people are satisfied and contented with what they have at present, and the concepts of working hard and acquiring more are no longer drivers of progress. Whereas the dynamics of Chinese society are highly progressive and competitive, where individuals want to have more in less time and work hard for that. This demanding attitude for more work makes China, as a nation, a prime target for investment and hence makes its economy and growth successful. The second and most important reason for its exponential expansion in the future is the improved qualification of its workforce. The rapid increase in the enrolment of students in all types of educational institutions, such as 100% enrolment in high school and 50% in colleges, shows that soon these educated workers will replace the formerly illiterate labour force. This transition will take place in almost every sector, therefore substantially increasing the productivity of the

US foreign policy during Cold War Thesis Example | Topics and Well Written Essays - 5000 words

US foreign policy during Cold War - Thesis Example In investigating facts the writer does remarkable work, though his exposé unearths the dark side of the US in regard to the use of tainted informers and henchmen against its arch-rival superpower. Simpson's careful conclusions, nonetheless, ruffle some feathers. One would not recognize easily from his narration that the Eastern European power was to blame for the start of the conflict, or that Western countries had any genuine concern following the containment of social liberty in the region. The use of ex-Nazi officials by the United States in the Cold War against the Soviet Union resulted in a "blowback" effect back in the country, as it triggered more socio-economic and political challenges there. The sharp analysis of the role played by American officials relates to everyone; the most prominent ones include Truman, the Dulles kin, Eisenhower, and George Kennan, as well as the many personalities in the key intelligence and national-security organs. These agencies and individuals are believed to have carried out the murky work, involving the brand of falsity, distrust, amorality, and zealotry with the potential of the Soviet threat. The "blowback" effects amount to "the unintended consequences of U.S. foreign policies" during the Cold War. The amalgamation of muckraking operations and historical evaluations takes care of one factor of the narration given by the author: the jostling for influence among the key allied states, each seeking to cage and stamp its authority for national significance, exposed the researchers who had played pivotal roles in the empowerment of Hitler's war machine. In regard to their natural accomplishments, which the writer explains exhaustively, it is normal that many troops who fought on the side of the Allied countries were keener on their skills, and on consolidating them to the disadvantage of the enemies, than on their historical accounts. Whereas the issue of national interest was legitimate, in implementation the end justified the most insignificant of means.

Tenets of the American policy

The fundamental principle American policymakers employed after the Second World War to incorporate ex-Nazis and informers was the likelihood, or the inevitable occurrence, of a fresh conflict pitting the two superpowers, the United States and the USSR. The expectation of the United States of a long-standing conflict was aggravated by the geopolitical hostilities between European powers and some Asian powers immediately after 1945; by the lack of consistent details on the actual situation in the East; and commonly by spiritual regulations that emphasized that Communism amounted to Satanism. Such observations differed across societies; however, they amounted to a significant phenomenon. The real weighing of triggering factors in Europe in the mid-twentieth century, however, implied that neither of the two world superpowers had the capacity to stamp its unilateral authority in the face of the other through the use of military might alone.

Tuesday, July 23, 2019

Analysis of Coronary Artery Disease Assignment Example | Topics and Well Written Essays - 1250 words - 1

Analysis of Coronary Artery Disease - Assignment Example Therefore, any disorder or malfunctioning in the coronary arteries may lead to a serious cut-off in the flow of oxygen and minerals to the heart, leading to an imbalance between the supply and demand of oxygen, which is life-threatening because the heart is the pump of blood circulation, which supplies oxygen to all organs. Atherosclerosis is the chief cause of coronary artery disease (CAD), which causes changes in the structure as well as the functionality of blood vessels. It is the process in which progressive deposition of cholesterol and other fatty materials across the arterial wall occurs. This deposition results in a contraction of the lumen, i.e. stenosis, which restricts blood flow. Further, spasm, birth defects, lupus, arthritis, and blood clotting are a few other causes apart from atherosclerosis. Ten years ago, CAD was thought to be a disorder of men. Generally, CAD occurs a decade earlier in men than in women, up to the time of menopause, because a high level of estrogen protects women from CAD. However, after menopause, it happens more frequently in women in comparison to men. It is noticed that the ratio of women suffering from CAD is higher than that of men in the age group of 75 or beyond. CAD is assumed to be the leading life-taker in developed countries. Studies imply that about 5-9% of people aged 20+ suffer from CAD. The death rate rises with age, and it is more common in males in comparison to females, but the death rates for men decrease sharply after the age of 55 and again after age 75. The death rate of women is higher than that of men of the same age. It is estimated that more than 16 million Americans suffer from CAD and 8 million of them have had a myocardial infarction (increasing by 1 million per annum). The Framingham trial predicts that approximately 50% and 30% of males and females respectively in the age 40+ population suffer from CAD (Helen H, and Munther K).

Monday, July 22, 2019

Trip Report Essay Example for Free

Trip Report Essay

INTRODUCTION
Where: Tokyo, Japan
When: 23 January 2008 - 28 January 2008
Why: Communication Exhibition (Technology Scout)
What Next: Company Application of the Learned Innovations

The company RUNC TELCOM is a joint venture of network products in the field of information and communication technology. It specializes in producing and dealing in quality telecommunication equipment, accessories and network products. RUNC INC is committed to providing quality products and services to customers and is an ISO9001 quality accredited company.

DISCUSSION
During the conference, we were able to observe how a state-of-the-art SONY P300 Cellular Phone works. Likewise, the process of production of the said piece of technological art has led us to the conclusion that technology today is certainly not only catering to the needs of society but also to the wants of the majority of the human population. The said communication gadget comprises different features that suit the modernized human community today. It is primarily video-conference capable and can easily connect to the Internet. As the producer of the product, the Miyoko Sony Company has mentioned that the said gadget is indeed one of a kind and could be further developed by other companies as they wish to do so. As for the expenses consumed during the exhibit, the breakdown is presented below:
Rental car: 300.00
Food and hotel: 800.00
Conference room rental: 400.00

CONCLUSION AND RECOMMENDATION
As noted earlier, the exhibit aimed to introduce a new way of treating modern systems of communication through introducing the SONY P3500 Cellular Phone to society. This is the primary reason why it is suggested that RUNC TELCOM grasp the important implications of the said exhibit within its system so as to move towards progress. As recommended, RUNC TELCOM is advised to consider the following suggestions: Locate and buy thin communicators [cellular phones] that are innovatively capable of VTC connection and cheaper than any phone on the market today. Find and exhibit their features to the market and attract customers to buy the product, thus becoming one of the companies controlling the cellular industry today.

Sunday, July 21, 2019

Viscoplasticity and Static Strain Ageing

Viscoplasticity and Static Strain Ageing

Viscoplasticity

Inelastic deformation of materials is broadly classified into rate-independent plasticity and rate-dependent plasticity. The theory of viscoplasticity describes inelastic deformation of materials depending on time, i.e. on the rate at which the load is applied. In metals and alloys, the mechanism of viscoplasticity is usually explained by the movement of dislocations in grains [21]. From experiments, it has been established that most metals tend to exhibit viscoplastic behaviour at high temperatures. Some alloys are found to exhibit this behaviour even at room temperature. Formulating the constitutive laws for viscoplasticity can be classified into the physical approach and the phenomenological approach [23]. The physical approach relies on the movement of dislocations in the crystal lattice to model the plasticity. In the phenomenological approach, the material is considered as a continuum, and thus the microscopic behaviour can be represented by the evolution of certain internal variables instead. Most models employ kinematic hardening and isotropic hardening variables in this respect. Such a phenomenological approach is used in this work too.

According to the classical theory of plasticity, the deviatoric stress is the main contributor to the yielding of materials, and the volumetric or hydrostatic stress does not influence the inelastic behaviour. It also introduces a yield surface to differentiate the elastic and plastic domains. The size and position of such a yield surface can be changed by the strain history, to model the exact stress state. The theory of viscoplasticity differs from the plasticity theory by employing a series of equipotential surfaces. This helps define an over-stress beyond the yield surface. The plastic strain rate is given by the viscoplastic flow rule. To model the hardening behaviour, the introduction of several internal variables is necessary. Unlike strain or temperature, which can be measured to assess the stress state, internal variables or state variables are used to capture the material memory by means of evolution equations. These must include a tensor variable to define the kinematic hardening and a scalar variable to define the isotropic hardening. The evolution of these internal variables allows us to define the complete hardening behaviour of materials. In this work we consider only the small-strain framework. The basic principles of viscoplasticity are similar to those from plasticity theory. The main difference is the introduction of time effects. Thus the concepts from plasticity, and the introduction of time effects to describe viscoplasticity, as summarised by Chaboche and Lemaitre [21], are discussed in this chapter.

Basic principles

Considering the small-strain framework, the strain tensor can be split into its elastic and inelastic parts:

ε = ε^e + ε^in    (2.1)

where ε is the total strain, ε^e is the elastic strain and ε^in is the inelastic strain. In this work, we neglect creep and thus consider only the plastic strain to be the inelastic strain. Hence we can rewrite the above equation as:

ε = ε^e + ε^p    (2.2)

where ε^p is the plastic strain. Let us consider a field with stress σ = σ_ij(x) and external volume forces f_i. The equilibrium condition is given as:

∂σ_ij/∂x_j + f_i = 0,   i, j ∈ {1, 2, 3}    (2.3)

From the balance of moment of momentum, we know that the Cauchy stress tensor is symmetric.
The strain tensor is calculated from the gradient of the displacement u as:

ε_ij = (1/2) (∂u_i/∂x_j + ∂u_j/∂x_i)    (2.4)

Hooke's law for the relation between the stress and strain tensors is given using the elastic part of the strain:

σ = E · ε^e    (2.5)

where ε^e and the stress σ are second-order tensors and E is the fourth-order elasticity tensor.

Equipotential surfaces

In the traditional plasticity theory, which is time-independent, the stress state is governed by a yield surface and loading-unloading conditions. In viscoplasticity, the time- or rate-dependent plasticity is described by a series of concentric equipotential surfaces. The location of the centre and the size of these surfaces determine the stress state of a given material.

Fig. 2.1 Illustration of equipotential surfaces, from [21]

It can be understood that the innermost surface, the surface closest to the centre, represents a null flow rate (Ω = 0). As shown in Figure 2.1, the outermost surface, farthest from the centre, represents an infinite flow rate (Ω = ∞). These two surfaces represent the extremes governed by the time-independent plasticity laws. The region in between is governed by viscoplasticity [21]. The size of an equipotential surface is proportional to the flow rate: the greater the flow, the greater the surface size. The region between the centre and the innermost surface is the elastic domain. Flow begins at this innermost surface (f = 0).

In viscoplasticity, there are two types of hardening rules to be considered: (i) kinematic hardening and (ii) isotropic hardening. Kinematic hardening describes the movement of the equipotential surfaces in the stress plane. From materials science, this behaviour is known to be the result of dislocations accumulating at barriers. Thus it helps in describing the Bauschinger effect [27], which states that when a material is subjected to yielding by a compressive load, the elastic domain is increased for the consecutive tensile load. This behaviour is represented by α, which does not evolve continuously during cyclic loads and thus fails to describe cyclic hardening or softening behaviours. A schematic representation is shown in Fig. 2.2.

Fig. 2.2 Linear kinematic hardening and stress-strain response, from [11]

Isotropic hardening, on the other hand, describes the change in size of the surface and assumes that the centre and shape remain unchanged. This behaviour is due to the number of dislocations in a material and the energy stored in it. It is represented by the variable r, which evolves continuously during cyclic loadings. This can be controlled by the recovery phase. As a result, isotropic hardening is helpful in modelling the cyclic hardening and softening phenomena. A schematic representation is shown in Fig. 2.3.

Fig. 2.3 Linear isotropic hardening and stress-strain response, from [11]

From thermodynamics, we know the free energy potential ψ to be a scalar function [21]. With respect to temperature T it is concave, but convex with respect to the other internal variables. Thus, it can be defined as:

ψ = ψ(ε, T, ε^e, ε^p, V_k)    (2.6)

where ε and T are the only measured quantities that can help model plasticity. V_k represents the set of internal variables, also known as state variables, which help define the memory of the previous stress states. In viscoplasticity, it is assumed that ψ depends only on ε^e, T and V_k. Thus we have:

ψ = ψ(ε^e, T, V_k)    (2.7)

According to thermodynamic rules, stress is associated with strain and the entropy with temperature.
This helps us define the following relations:

σ = ρ ∂ψ/∂ε^e,   s = −∂ψ/∂T    (2.8)

where ρ is the density and s is the entropy. It is possible to decouple the free energy function and split it into elastic and plastic parts:

ψ = ψ_e(ε^e, T) + ψ_p(α, r, T)    (2.9)

Similarly to σ, the thermodynamic forces corresponding to α and r are given by:

X = ρ ∂ψ/∂α,   R = ρ ∂ψ/∂r    (2.10)

Here X is the back-stress tensor, used to measure kinematic hardening. It is the kinematic hardening variable, which defines the position tensor of the centre of the equipotential surface. Similarly, R is the isotropic hardening variable, which governs the size of the equipotential surface.

Dissipation potential

The equipotential surfaces that describe viscoplasticity have some properties. Points on each surface have a magnitude equal to the strain rate. Points on each surface have the same dissipation potential. If the potential is zero, there is no plasticity and this corresponds to the elastic domain. The dissipation potential is represented by Ω, which is a convex function. It can be defined in a dual form as:

Ω = Ω(σ, X, R; T, α, r)    (2.11)

It is a positive function, and if the variables σ, X and R are zero, then the potential is also zero. The normality rule, defined in [22], suggests that the outward normal vector is proportional to the gradient of the yield function. Applying the normality rule, we may obtain the following relations:

ε̇_p = ∂Ω/∂σ,   α̇ = −∂Ω/∂X,   ṙ = −∂Ω/∂R    (2.12)

Considering the recovery effects in viscoplasticity, the dissipation potential can be split into two parts:

Ω = Ω_p + Ω_r    (2.13)

where Ω_p is the viscoplastic potential and Ω_r the recovery potential, which are defined as:

Ω_p = Ω_p( J_2(σ − X) − R − k, X, R; T, α, r )    (2.14)

Ω_r = Ω_r( X, R; T, α, r )    (2.15)

J_2(σ − X) = [ (3/2) (σ′ − X′) : (σ′ − X′) ]^(1/2)    (2.16)

where J_2(σ − X) refers to the norm on the stress plane and k is the initial yield, i.e. the initial size of the equipotential surface. Going back to the relation in (2.12), we have:

ε̇_p = ∂Ω/∂σ = (∂Ω/∂J_2(σ − X)) (∂J_2(σ − X)/∂σ) = (3/2) ((σ′ − X′)/J_2(σ − X)) ṗ    (2.17)

Here, p is the accumulated viscoplastic strain, given by:

ṗ = [ (2/3) ε̇_p : ε̇_p ]^(1/2)    (2.18)

Also, applying the normality rule to eq. (2.15), we may define r as:

ṙ = ṗ − ∂Ω_r/∂R    (2.19)

Thus, when recovery is ignored (i.e. Ω_r = 0), r is equal to p.

Perfect viscoplasticity

Let us consider pure viscoplasticity, where hardening is ignored. Thus the internal variables may also be removed:

Ω = Ω(σ, T)    (2.20)

Since plasticity is independent of the volumetric stress, we may consider just the deviatoric stress σ′ = σ − (1/3) tr(σ) I. Using the isotropy property, we may use just the second invariant of σ′. Thus:

Ω = Ω( J_2(σ), T )    (2.21)

Applying the normality rule here, we may obtain the flow rule for viscoplasticity:

ε̇_p = ∂Ω/∂σ = (3/2) (∂Ω/∂J_2(σ)) (σ′/J_2(σ))    (2.22)

From Odqvist's law [12], the dissipation potential for perfect viscoplasticity can be obtained. Here the elastic part is ignored.
Thus we have:

Ω = (λ/(n + 1)) ( J_2(σ)/λ )^(n+1)    (2.23)

where λ and n are material parameters. Using this relation in the flow rule from eq. (2.22), we get:

ε̇_p = (3/2) ( J_2(σ)/λ )^n (σ′/J_2(σ))    (2.24)

Further, the elasticity domain can be included through the parameter k, which is a measure of the initial yield:

ε̇_p = (3/2) ⟨ (J_2(σ) − k)/λ ⟩^n (σ′/J_2(σ))    (2.25)

Here ⟨·⟩ are the Macauley brackets, defined by:

⟨F⟩ = F · H(F),   H(F) = 1 if F ≥ 0, 0 if F < 0    (2.26)
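As a purely illustrative numerical sketch of the flow rule in eqs. (2.25)-(2.26), the following Python snippet evaluates the viscoplastic strain rate for a given stress tensor; the material parameters (λ, n, k) used here are arbitrary placeholder values, not calibrated data from this work.

```python
import numpy as np

def macauley(x):
    """Macauley bracket <x>: returns x if x >= 0, else 0 (eq. 2.26)."""
    return x if x >= 0.0 else 0.0

def viscoplastic_strain_rate(sigma, lam, n, k):
    """Norton/Odqvist-type flow rule of eq. (2.25):
    eps_dot_p = (3/2) * <(J2(sigma) - k)/lam>**n * sigma_dev / J2(sigma)."""
    sigma_dev = sigma - np.trace(sigma) / 3.0 * np.eye(3)       # deviatoric stress
    j2 = np.sqrt(1.5 * np.tensordot(sigma_dev, sigma_dev))      # J2 invariant (eq. 2.16 with X = 0)
    if j2 == 0.0:
        return np.zeros((3, 3))
    return 1.5 * macauley((j2 - k) / lam) ** n * sigma_dev / j2

# Uniaxial tension of 300 (stress units) with placeholder parameters:
sigma = np.diag([300.0, 0.0, 0.0])
print(viscoplastic_strain_rate(sigma, lam=1000.0, n=5.0, k=100.0))
```

For this uniaxial state the code gives J_2(σ) = 300, so the over-stress term ⟨(J_2(σ) − k)/λ⟩ equals 0.2, and the resulting strain-rate tensor is traceless, as required by a flow rule driven only by the deviatoric stress.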