Mansoureh Labafniya; Shahram Etemadi Borujeni
Abstract
There are many different ways of securing FPGAs to prevent successful reverse engineering. One of the most common is obfuscation. In this paper, we propose an obfuscation-based approach to protect FPGAs from successful reverse engineering and, as a result, from Hardware Trojan Horse (HTH) insertion. Our obfuscation method uses ConFiGurable Look-Up Tables (CFGLUTs). We suggest inserting CFGLUTs into the design either randomly or based on optional parameters. In this way, some parts of the design reside in a secure memory, which holds the bitstream of the CFGLUTs, so the attacker has no access to them. We program the CFGLUTs at run-time to complete the bitstream of the FPGA and the functionality of the design. Even if an attacker can reverse engineer the bitstream of the FPGA, he cannot recover the design, because part of it is composed of CFGLUTs whose bitstream is stored in the secure memory. This is the first article to use CFGLUTs for securing FPGAs against HTH insertion resulting from reverse engineering. Our method incurs no power or hardware overhead, only a time overhead of 32 clock cycles.
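The run-time reconfiguration described above can be illustrated with a small software model. The sketch below is a toy Python model of a CFGLUT5-like primitive (the real Xilinx primitive is a 5-input LUT whose 32-bit truth table is shifted in serially, one bit per clock cycle, which matches the 32-clock-cycle time overhead quoted above); the class and method names are ours, not the paper's.

```python
class CFGLUT:
    """Toy model of a runtime-configurable 5-input LUT (CFGLUT5-like)."""

    def __init__(self):
        self.table = [0] * 32          # 32-bit truth table, initially blank

    def shift_in(self, bit):
        """One clock cycle: shift a single configuration bit into the table."""
        self.table = [bit & 1] + self.table[:-1]

    def load(self, bits32):
        """Load a full truth table: 32 shift_in calls = 32 clock cycles."""
        for b in reversed(bits32):
            self.shift_in(b)

    def lookup(self, inputs):
        """Evaluate the LUT on 5 input bits (inputs[0] is the LSB of the index)."""
        idx = 0
        for i, v in enumerate(inputs):
            idx |= (v & 1) << i
        return self.table[idx]
```

Only after the secure memory supplies these 32 bits does the LUT compute its intended function, which is why a reverse-engineered FPGA bitstream alone does not reveal the design.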
Elham Serkani; Hossein Gharaee Garakani; Naser Mohammadzadeh
Abstract
With the advancement and development of computer network technologies, the way for intruders has become smoother; therefore, to detect threats and attacks, the importance of intrusion detection systems (IDSs) as one of the key elements of security is increasing. One of the challenges of intrusion detection systems is managing the large number of network traffic features. Removing unnecessary features is a solution to this problem, and machine learning methods are among the best ways to design an intrusion detection system. Focusing on this issue, in this paper we propose a hybrid intrusion detection system using decision tree and support vector machine (SVM) approaches. In our method, feature selection is initially done by C5.0 decision tree pruning, and the features with the least predictor importance are removed. After removing each feature, the least squares support vector machine (LS-SVM) is applied. The set of features yielding the largest area under the Receiver Operating Characteristic (ROC) curve for the LS-SVM is taken as the final feature set. Experimental results on the KDD Cup 99 and UNSW-NB15 data sets show that the proposed approach improves the true positive rate, false positive rate, and accuracy compared to the best prior work.
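The backward elimination loop described above can be sketched in a few lines. The paper uses C5.0 predictor importance and LS-SVM ROC AUC; `importance` and `score` below are deliberately simple toy stand-ins for illustration only.

```python
def importance(dataset, feature):
    """Toy predictor importance: class-mean separation of a single feature."""
    pos = [row[feature] for row, label in dataset if label == 1]
    neg = [row[feature] for row, label in dataset if label == 0]
    return abs(sum(pos) / len(pos) - sum(neg) / len(neg))

def score(dataset, features):
    """Toy subset score (stand-in for LS-SVM area under the ROC curve)."""
    return sum(importance(dataset, f) for f in features) / len(features)

def select_features(dataset, features):
    """Repeatedly drop the least-important feature; keep the best-scoring subset."""
    current = list(features)
    best_subset, best_score = list(current), score(dataset, current)
    while len(current) > 1:
        weakest = min(current, key=lambda f: importance(dataset, f))
        current.remove(weakest)
        s = score(dataset, current)
        if s > best_score:
            best_subset, best_score = list(current), s
    return best_subset
```

In the actual system, each candidate subset would be scored by retraining the LS-SVM and measuring its ROC AUC rather than by this toy separation score.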
E. Shakeri; Sh. Ghaemmaghami
Abstract
The aim of image steganalysis is to detect the presence of hidden messages in stego images. We propose a blind image steganalysis method in the Contourlet domain and show that the embedding process changes the statistics of Contourlet coefficients. The suspicious image is transformed into Contourlet space, and the statistics of the Contourlet subband coefficients are extracted as features. We use absolute Zernike moments and characteristic function moments of the Contourlet subband coefficients to distinguish between stego and non-stego images: absolute Zernike moments examine the randomness in the test image, while characteristic function moments capture the changes made to the histogram of the Contourlet coefficients. These features are fed to a nonlinear SVM classifier with an RBF kernel to distinguish between cover and stego images. Experimental results confirm that the proposed features are highly sensitive to the changes made by the embedding process, and reveal the advantage of the proposed method over its counterpart steganalyzers for five popular JPEG steganography techniques.
Farnoush Manavi; Ali Hamzeh
Abstract
With the spread of information technology in human life, data protection is a critical task. On the other hand, malicious programs are developed that can manipulate sensitive and critical data and restrict access to it. Ransomware is an example of such a malicious program: it encrypts data, restricts users' access to the system or their data, and then requests a ransom payment. Many methods have been proposed for ransomware detection, most of which attempt to identify ransomware by relying on program behavior during execution. The main weakness of these methods is that it is not clear how long the program should be monitored before it shows its real behavior; therefore, they sometimes fail to detect ransomware early. In this paper, a new method for ransomware detection is proposed that does not require executing the program and instead uses the PE header of the executable file. To extract effective features from the PE header, an image is constructed from it. Then, given the advantages of Convolutional Neural Networks (CNNs) in extracting features from images and classifying them, a CNN is used. The proposed method achieves high detection rates. Our results indicate the usefulness and practicality of our method for ransomware detection.
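The header-to-image step can be sketched as follows. This is a minimal illustration assuming a fixed-size grayscale grid; the function name and the 32x32 default are our choices, not the paper's exact construction.

```python
def header_to_image(header_bytes, width=32):
    """Map raw PE-header bytes onto a width x width grayscale matrix.

    Bytes beyond width*width are truncated; shorter input is zero-padded,
    so the CNN always sees a fixed-size image.
    """
    size = width * width
    data = list(header_bytes[:size]) + [0] * max(0, size - len(header_bytes))
    return [data[i * width:(i + 1) * width] for i in range(width)]
```

Each byte becomes one pixel intensity in [0, 255]; a CNN can then be trained directly on these matrices.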
S. Avizheh; M. Rajabzadeh Asaar; M. Salmasizadeh
Abstract
A convertible limited (multi-)verifier signature (CL(M)VS) provides controlled verifiability and preserves the privacy of the signer. Furthermore, the limited verifier(s) can designate the signature to a third party or convert it into a publicly verifiable signature when necessary. In this proposal, we first present a generic construction of the convertible limited verifier signature (CLVS) into which the existing secure CLVS schemes fit. Afterwards, we extend this generic construction to address the unsolved question of designing an efficient construction with more than two limited verifiers. To this end, two generic CLMVS constructions are presented, both efficient in that they generate a unique signature for more than two limited verifiers. In the first generic construction, each limited verifier checks the validity of the signature on its own, while in the second, the cooperation of all limited verifiers is required. Then, building on our second generic construction, we present the first pairing-based CLMVS scheme secure in the standard model, which also provides a strong confirmation property. Finally, we employ the proposed CLMVS scheme with one limited verifier (CLVS) to design a new electronic voting protocol.
Maryam Azadmanesh; Behrouz Shahgholi Ghahfarokhi; Maede Ashouri-Talouki
Abstract
Using generative models to produce unlimited synthetic samples is a popular replacement for database sharing. The Generative Adversarial Network (GAN) is a popular class of generative models that generates synthetic data samples very similar to the real training data. However, GAN models do not necessarily guarantee training privacy, as they may memorize details of training data samples. When these models are built using sensitive data, the developers should ensure that the training dataset is appropriately protected against privacy leakage. Hence, quantifying the privacy risk of these models is essential. To this end, this paper focuses on evaluating the privacy risk of publishing the generator network of GAN models. Specifically, we propose a novel white-box membership inference attack against the generator of GAN models that exploits the information accessible to the attacker, i.e., the generator's weights and synthetic samples. In the proposed attack, an auto-encoder is trained to distinguish member from non-member training records. The attack is applied to various kinds of GANs, and we evaluate its accuracy with respect to various model types and training configurations. The results demonstrate the superior performance of the proposed attack on non-private GANs compared to previous attacks with white-box generator access: its accuracy is on average 19% higher than that of similar work. Like previous attacks, the proposed attack performs better for victim models trained with small training sets.
Omed Hassan Ahmed; Joan Lu; Qiang Xu; Muzhir Shaban Al-Ani
Abstract
Standard face recognition algorithms that use standard feature extraction techniques always suffer from image performance degradation. Recently, singular value decomposition and low-rank matrices have been applied in many applications, including pattern recognition and feature extraction. The main objective of this research is to design an efficient face recognition approach by combining several techniques to generate efficient recognition results. The implemented face recognition approach concentrates on obtaining a significant rank matrix by applying a singular value decomposition technique. Measures of dispersion are used to indicate the distribution of data, and among the applied ranks there is an adequate rank that the implemented procedure aims to reach. Interquartile range, mean absolute deviation, range, variance, and standard deviation are applied to select the appropriate rank. Ranks 24, 12, and 6 reached an excellent 100% recognition rate with data reduction of up to 2:1, 4:1, and 8:1, respectively. In addition, the adequate rank matrix is properly selected based on the dispersion measures. Results obtained on standard face databases verify the efficiency and effectiveness of the implemented approach.
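The five dispersion measures named above can be computed with the standard library. A small sketch follows; the quartile convention (split halves, excluding an odd-length median) is one common choice, and other conventions give slightly different IQR values.

```python
import statistics

def dispersion_measures(values):
    """Compute the five dispersion measures used for rank selection."""
    values = sorted(values)
    n = len(values)
    mean = statistics.mean(values)
    q1 = statistics.median(values[: n // 2])          # lower half
    q3 = statistics.median(values[(n + 1) // 2:])     # upper half
    return {
        "range": values[-1] - values[0],
        "iqr": q3 - q1,
        "mad": sum(abs(v - mean) for v in values) / n,  # mean absolute deviation
        "variance": statistics.pvariance(values),
        "std": statistics.pstdev(values),
    }
```

In the rank-selection setting, `values` would be the singular values (or reconstruction errors) observed for each candidate rank.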
M. Mehrnejad; A. Ghaemi Bafghi; A. Harati; E. Toreini
Abstract
As the protection of web applications becomes more important every day, CAPTCHAs are attracting increasing attention from both users and designers. Nowadays, it is well accepted that using visual concepts enhances the security and usability of CAPTCHAs. There exist a few major ideas for designing image CAPTCHAs. Some methods apply a set of modifications, such as rotations, to the original image saved in the database to make the CAPTCHA more secure. In this paper, two different approaches for designing image-based CAPTCHAs are introduced. The first, called the Tagging Image CAPTCHA, is based on pre-tagged images and uses geometric transformations to increase security; the second approach enhances the first by eliminating the use of tags and relying on semantic visual concepts. In fact, recognition of upright orientation is used as a visual cue. The usability of the proposed approaches is verified using human subjects, and an estimate of their security is obtained under different kinds of attacks. Further studies are conducted on the proposed transformations and on the suitability of each original image for each approach. The results suggest a practical Semantic Image CAPTCHA that is usable and secure compared to its peers.
R. Ramezanian; M. Pourpouneh
Abstract
We propose a new decentralized online sortition protocol and argue that it has the safety, fairness, randomness, non-repudiation, and openness properties. Sortition is a process that makes a random decision; it is used in competitions and lotteries to determine the winner. In the real world, sortition is simply done using a lottery machine, and all the participants can be sure of the safety, fairness, randomness, non-repudiation, and openness properties. But how can we perform sortition in a virtual world so that it satisfies the desired properties? The idea is decentralization. Using cryptographic notions, we provide a protocol in which all agents participate in computing the winner of the sortition. Our proposed protocol is novel, completely different from other sortition protocols, and decentralized. It is simple, can easily be implemented, and can find commercial use for markets that want to give presents to their customers in a fair and transparent manner.
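A decentralized sortition in which every agent contributes to the outcome is commonly built from a commit-reveal pattern. The sketch below illustrates that general idea only; it is not the authors' protocol, and the function names are ours.

```python
import hashlib

def commit(value: int, nonce: bytes) -> str:
    """Publish a binding commitment to a secret random contribution."""
    return hashlib.sha256(nonce + value.to_bytes(8, "big")).hexdigest()

def verify(value: int, nonce: bytes, commitment: str) -> bool:
    """Check a revealed (value, nonce) pair against the earlier commitment."""
    return commit(value, nonce) == commitment

def winner(revealed_values, n_participants: int) -> int:
    """XOR all revealed contributions; no single agent controls the result."""
    combined = 0
    for v in revealed_values:
        combined ^= v
    return combined % n_participants
```

Each agent first publishes commit(v_i, nonce_i); only after all commitments are posted are the (v_i, nonce_i) pairs revealed, verified, and combined, so no one can choose a contribution after seeing the others'.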
A. Khalesi; H. Bahramgiri; D. Mansuri
Abstract
Impossible differential cryptanalysis, the extension of differential cryptanalysis, is one of the most efficient attacks against block ciphers. This cryptanalysis method has been applied to most of the block ciphers and has shown significant results. Using structures, key schedule considerations, early abort, and pre-computation are some common methods to reduce complexities of this attack. In this paper, we present a new method for decreasing the time complexity of impossible differential cryptanalysis through breaking down the target key space into subspaces, and extending the results on subspaces to the main target key space. The main advantage of this method is that there is no need to consider the effects of changes in the values of independent key bits on each other. Using the 14-round impossible differential characteristic observed by Boura et al. at ASIACRYPT 2014, we implement this method on 23-round LBlock and demonstrate that it can reduce the time complexity of the previous attacks to 2^71.8 23-round encryptions, using 2^59 chosen plaintexts and 2^73 blocks of memory.
M. Safarzadeh; M. Taghizadeh; B. Zamani; B. Tork Ladani
Abstract
One of the main requirements for providing software security is the enforcement of access control policies, which aim to protect the resources of a system against unauthorized access. Any error in the implementation of such policies may lead to undesirable outcomes. For testing the implementation of access control policies, it is preferable to use automated methods, which are faster and more reliable. Although several studies have been conducted on automated testing of access control policy specifications at the design phase, there is not enough research on testing their implementation. In addition, since access control is among the non-functional requirements of a system, it is not easy to test it alongside the other requirements of the system by the usual methods. To address this challenge, in this paper we propose an automated method for testing the implementation of access control in a system. This method, as a model-based technique, is able to extract test cases for evaluating the access control policies of the system under test. To generate test cases automatically, a combination of the behavior model of the system and the specification of the access control policies is used. The experimental results show that the proposed approach is able to find failures and cover most of the code related to access control policies.
M. Vosoughi; A. Jahanian
Abstract
Nowadays, many designers prefer to outsource parts of their design and fabrication process to third-party companies due to reliability problems, manufacturing cost, and time-to-market limitations. In this situation, there are many opportunities for malicious alterations by the off-shore companies. In this paper, we propose a new placement algorithm that hinders hardware Trojan insertion or simplifies the detection process in the presence of Trojans. Experimental results show that the proposed placement improves the Trojan detectability of the attempted benchmarks by more than 20% at a reasonable cost in delay and wire length.
Reza Ebrahimi Atani; Shahabaddin Ebrahimi Atani; Amir Hassani Karbasi
Abstract
Smooth Projective Hash Functions (SPHFs), as a specific kind of zero-knowledge proof system, are fundamental tools for building many efficient cryptographic schemes and protocols. As an application of SPHFs, Password-Based Authenticated Key Exchange (PAKE) has been a well-studied area in the last few years. In 2009, Katz and Vaikuntanathan described the first lattice-based PAKE using the Learning With Errors (LWE) problem. In this work, we present a new efficient ring-based smooth projective hash function (Ring-SPHF), using Lyubashevsky, Peikert, and Regev's dual-style cryptosystem based on the Learning With Errors over Rings (Ring-LWE) problem. Then, using our Ring-SPHF, we propose the first efficient password-based authenticated key exchange protocol over rings (Ring-PAKE), whose security relies on ideal lattice assumptions.
M. Zabihi; M. Vafaei Jahan; J. Hamidzadeh
Abstract
Today, the world's dependence on the Internet and the emergence of Web 2.0 applications are significantly increasing the number of web robots crawling sites to support services and technologies. Regardless of their advantages, robots may occupy bandwidth and reduce the performance of web servers. Despite a variety of studies, there is no accurate method for classifying huge data sets of web visitors in a reasonable amount of time; moreover, such a technique should be insensitive to the ordering of instances and produce deterministic, accurate results. Therefore, this paper presents a density-based clustering approach, using Density-Based Spatial Clustering of Applications with Noise (DBSCAN), to classify the web visitors of two real, large data sets. We propose two new features based on the behavioral patterns of visitors to describe them. Moreover, we consider 12 common features and use a significance-of-difference test (T-test) to reduce the dimensionality and overcome one of the disadvantages of DBSCAN. Based on supervised evaluation metrics, the proposed algorithm achieves 95% on the Jaccard metric and produces two clusters with entropy and purity of 0.024 and 0.97, respectively. Furthermore, in terms of clustering quality and accuracy, the proposed method performs better than state-of-the-art algorithms. Finally, it can be concluded that some known web robots imitate human users, which makes them difficult to identify.
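DBSCAN itself is compact enough to sketch in plain Python. A minimal, unoptimized version follows; real deployments would use an indexed implementation such as scikit-learn's, and the web-visitor features would form the point vectors.

```python
def region_query(points, i, eps):
    """Indices of all points within distance eps of points[i] (includes i)."""
    return [j for j, q in enumerate(points)
            if sum((a - b) ** 2 for a, b in zip(points[i], q)) <= eps ** 2]

def dbscan(points, eps, min_pts):
    """Minimal DBSCAN: returns a cluster label per point, -1 meaning noise."""
    labels = [None] * len(points)          # None = not yet visited
    cluster = 0
    for i in range(len(points)):
        if labels[i] is not None:
            continue
        neighbors = region_query(points, i, eps)
        if len(neighbors) < min_pts:       # not a core point
            labels[i] = -1
            continue
        labels[i] = cluster                # start a new cluster and expand it
        seeds = list(neighbors)
        while seeds:
            j = seeds.pop()
            if labels[j] == -1:            # noise reachable from a core point
                labels[j] = cluster        # becomes a border point
            if labels[j] is not None:
                continue
            labels[j] = cluster
            j_neighbors = region_query(points, j, eps)
            if len(j_neighbors) >= min_pts:
                seeds.extend(j_neighbors)
        cluster += 1
    return labels
```

Because density, not instance order, drives the assignment, the result is deterministic up to cluster numbering, which is the property the paper relies on.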
Mitra Alidoosti; Alireza Nowroozi; Ahmad Nickabadi
Abstract
Parallel execution of multiple threads of a web application results in server-side races if the application is not synchronized correctly. Server-side races arise from flaws in the relation between the server and the database, and detecting race conditions in web applications depends on the business logic of the application. No logic-aware approach has yet been presented to deal with race conditions; furthermore, most existing approaches either result in DoS or produce false positives. In this study, the session puzzling race conditions existing in a web application are classified and described. In addition, we present Business-Layer Session Puzzling Racer, a black-box dynamic application security testing approach that detects the business-layer vulnerability of an application to session puzzling race conditions. Experiments on well-known and widely used web applications showed that Business-Layer Session Puzzling Racer is able to detect the business-layer vulnerabilities of these applications to race conditions. In addition, by identifying the business layer of the application, the amount of traffic generated to identify the vulnerabilities is reduced by about 94.38%; thus, Business-Layer Session Puzzling Racer does not result in DoS.
E. Hassani; M. Eshghi
Abstract
The present paper introduces a new algorithm for image encryption using chaotic tent maps and a desired key image. The algorithm consists of two parts, the first of which works in the frequency domain and the second in the time domain. In the frequency domain, a desired key image is used, and a random number is generated using the chaotic tent map in order to change the phase of the plain image. This change in the frequency domain alters the pixel values and shuffles the pixel locations in the time domain. Finally, in the time domain, a pseudo-random image is produced using a chaotic tent map and combined with the image generated in the first step, yielding the final encrypted image. A computer simulation is also utilized to evaluate the proposed algorithm and to compare its results to images encrypted by other methods. The criteria for these comparisons are the chi-square test of the histogram, correlation coefficients of pixels, NPCR (number of pixel change rate), UACI (unified average changing intensity), MSE (mean square error), MAE (mean absolute error), key space, and sensitivity to initial conditions. These comparisons reveal that the proposed chaotic image encryption method shows higher performance and is more secure.
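The time-domain part relies on a pseudo-random stream derived from a tent-map orbit. The following is a minimal sketch of that idea only: the quantization `int(x * 256) % 256`, the parameter `mu`, and the plain XOR combination are illustrative choices, not the paper's exact construction.

```python
def tent_map(x, mu=1.9999):
    """One iteration of the chaotic tent map on the unit interval."""
    return mu * x if x < 0.5 else mu * (1.0 - x)

def keystream(seed, length, mu=1.9999):
    """Quantize a tent-map orbit into a byte stream (illustrative only)."""
    x, out = seed, []
    for _ in range(length):
        x = tent_map(x, mu)
        out.append(int(x * 256) % 256)
    return out

def xor_image(pixels, seed):
    """XOR pixel values with the keystream; applying it twice decrypts."""
    return [p ^ k for p, k in zip(pixels, keystream(seed, len(pixels)))]
```

The seed plays the role of the secret key: the same seed regenerates the same orbit, so decryption is simply re-encryption, while a slightly different seed quickly produces a diverging orbit, which is the sensitivity-to-initial-conditions property the comparisons measure.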
Mohammad Ebrahim Ebrahimi Kiasari; Nasrollah Pakniat; Abdolrasoul Mirghadri; Mojtaba Nazari
Abstract
Secret sharing (SS) schemes allow the sharing of a secret among a set of trustees in such a way that only certain qualified subsets of them can recover the secret. Ordinary SS schemes assume that the trust in each trustee is fixed over time. However, this is not the case in many real scenarios. Social secret sharing (SSS) is a recently introduced type of SS that addresses this issue: it allows the sharing of a secret among a set of trustees such that the amount of trust in each participant can change over time. There exist only a few SSS schemes in the literature, and most of them can share only one secret per execution. Hence, these schemes lack the required efficiency in situations where multiple secrets need to be shared. According to the literature, there exists only one social multi-secret sharing (SMSS) scheme, in which all the secrets are reconstructed at a single stage. However, in many applications the secrets should be recovered in multiple stages, possibly according to some specified order. To address these problems, this paper employs the Birkhoff interpolation method and the Chinese remainder theorem to propose a new SMSS scheme. In the proposed scheme, the shareholders can recover the secrets in different stages and in the order specified by the dealer. The security analysis of the proposed scheme shows that it provides all the needed security requirements. In addition, the performance analysis of the proposed scheme indicates its overall superiority over the related schemes.
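The Chinese-remainder component can be sketched as a Mignotte-style scheme where each share is the secret reduced modulo a pairwise-coprime modulus. This illustrates only the CRT step, not the paper's Birkhoff-interpolation machinery or its trust dynamics.

```python
from math import prod

def crt(remainders, moduli):
    """Chinese remainder theorem: the unique x mod prod(moduli) with
    x = r_i (mod m_i) for pairwise-coprime moduli."""
    M = prod(moduli)
    x = 0
    for r, m in zip(remainders, moduli):
        Mi = M // m
        x += r * Mi * pow(Mi, -1, m)   # pow(., -1, m) is the modular inverse
    return x % M

def make_shares(secret, moduli):
    """Mignotte-style shares: share_i = secret mod m_i."""
    return [secret % m for m in moduli]
```

With moduli [11, 13, 17], any secret below 11 * 13 * 17 = 2431 is recovered from all three shares via `crt(shares, moduli)`; a proper Mignotte sequence additionally enforces a threshold t by bounding the secret between the product of the t smallest and the t-1 largest moduli.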
Dharmaraj Rajaram Patil; Jayantrao Patil
Abstract
Nowadays, malicious URLs are a common threat to businesses, social networks, net-banking, etc. Malicious URLs are involved in various Web attacks such as phishing, spamming, and malware distribution. Existing approaches have focused on binary detection, i.e., whether a URL is malicious or benign, and very little literature focuses on detecting both malicious URLs and their attack types. Hence, it becomes necessary to know the attack type in order to adopt an effective countermeasure. This paper proposes a methodology to detect malicious URLs and their attack types based on multi-class classification. In this work, we propose 42 new features of spam, phishing, and malware URLs, covering URL features, URL source features, domain name features, and short-URL features. These features were not considered in earlier studies on malicious URL detection and attack type identification. Binary and multi-class data sets are constructed using 49935 malicious and benign URLs: 26041 benign and 23894 malicious URLs, the latter comprising 11297 malware, 8976 phishing, and 3621 spam URLs. To evaluate the proposed approach, state-of-the-art supervised batch and online machine learning classifiers are used on the binary and multi-class data sets. It is found that the confidence-weighted learning classifier achieves the best average detection accuracy of 98.44% with a 1.56% error rate in the multi-class setting, and 99.86% detection accuracy with a negligible error rate of 0.14% in the binary setting, using our proposed URL features.
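Lexical URL features of this kind can be computed directly from the URL string. The six features below are our illustrative examples, not the paper's 42-feature set.

```python
from urllib.parse import urlparse

def url_features(url):
    """A few illustrative lexical URL features for a malicious-URL classifier."""
    parsed = urlparse(url)
    host = parsed.netloc
    return {
        "url_length": len(url),
        "num_dots": host.count("."),                     # many dots: suspicious
        "has_ip_host": host.replace(".", "").isdigit(),  # raw-IP hosts
        "num_params": len([p for p in parsed.query.split("&") if p]),
        "path_depth": len([s for s in parsed.path.split("/") if s]),
        "uses_https": parsed.scheme == "https",
    }
```

Feature dictionaries like these, computed per URL, would then be vectorized and fed to the batch or online classifiers for binary or multi-class training.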
H. Shakeri; A. Ghaemi Bafghi
Abstract
It is a common and useful task in a web of trust to evaluate the trust value between two nodes using intermediate nodes. This technique is widely used when the source node has no experience of direct interaction with the target node, or the direct trust is not reliable enough by itself. If trust is used to support decision-making, it is important to have not only an accurate estimate of trust, but also a measure of confidence in the intermediate nodes as well as in the final estimated value of trust. The present paper thus introduces a novel framework for the integrated representation of trust and confidence using intervals, which provides two operations: trust interval multiplication and summation. The former is used for computing propagated trust and confidence, whereas the latter provides a formula for aggregating different trust opinions. The properties of the two operations are investigated in detail. This study also proposes a time-variant method that considers freshness, expertise level, and two similarity measures in confidence estimation. The results of experiments carried out on two well-known trust datasets are reported and analyzed, showing that the proposed method increases the accuracy of trust inference in comparison with the existing methods.
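The two operations can be illustrated with generic interval arithmetic over [0, 1]. Note that these simple rules are a stand-in for the idea only; the paper's exact multiplication and summation formulas are not reproduced here.

```python
def trust_mul(t1, t2):
    """Propagate trust along a chain of two edges. Since both endpoints lie
    in [0, 1], elementwise multiplication keeps a valid [low, high] interval,
    and propagation through intermediaries can only lower trust, never raise it."""
    return (t1[0] * t2[0], t1[1] * t2[1])

def trust_sum(intervals):
    """Aggregate several opinions about the same target by averaging the
    interval endpoints (a simple stand-in aggregation rule)."""
    n = len(intervals)
    return (sum(t[0] for t in intervals) / n,
            sum(t[1] for t in intervals) / n)
```

A narrow interval then expresses high confidence in the estimate, and a wide one low confidence, which is the intuition behind representing trust and confidence together.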
M. Niknafs; S. Dorri Nogoorani; R. Jalili
Abstract
Reputation management systems are in wide-spread use to regulate collaborations in cooperative systems. Collusion is one of the most destructive malicious behaviors in which colluders seek to affect a reputation management system in an unfair manner. Many reputation systems are vulnerable to collusion, and some model-specific mitigation methods are proposed to combat collusion. Detection of colluders is shown to be an NP-complete problem. In this paper, we propose the Colluders Similarity Measure (CSM), which is used by a heuristic clustering algorithm (the Colluders Detection Algorithm, CDA) to detect colluders in O(n^2 m + n^4), in which m and n are the total number of nodes and colluders, respectively. Furthermore, we propose an architecture to implement the algorithm in a distributed manner which can be used together with compatible reputation management systems. Implementation results and comparison with other mitigation methods show that our scheme prevents colluders from unfairly increasing their reputation and decreasing the reputation of the other nodes.
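Colluders typically exchange mutually inflated ratings, so their outgoing rating vectors look alike. A generic similarity on rating vectors, shown below, is a stand-in for the idea behind the paper's CSM, whose exact formula is not reproduced here.

```python
from math import sqrt

def rating_similarity(r1, r2):
    """Cosine similarity of two nodes' outgoing rating vectors; it approaches
    1.0 for nodes that rate everyone almost identically, as colluders tend to."""
    dot = sum(a * b for a, b in zip(r1, r2))
    n1 = sqrt(sum(a * a for a in r1))
    n2 = sqrt(sum(b * b for b in r2))
    return dot / (n1 * n2) if n1 and n2 else 0.0
```

A clustering step can then group nodes whose pairwise similarity exceeds a threshold, which is the role the CDA plays in the paper.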
M. Safkhani; N. Bagheri
Abstract
Recently, Baghery et al. [1, 2] presented some attacks on two RFID protocols, namely the Yoon and Jung et al. protocols, and proposed improved versions of them. However, in this note, we show that the improved version of the Jung et al. protocol suffers from a desynchronization attack and the improved version ...
Read More
Recently, Baghery et al. [1, 2] presented some attacks on two RFID protocols, namely the Yoon and Jung et al. protocols, and proposed improved versions of them. However, in this note, we show that the improved version of the Jung et al. protocol suffers from a desynchronization attack and the improved version of Yoon's protocol suffers from a secret disclosure attack. The success probability of the desynchronization attack against the improved version of the Jung et al. protocol is (1 - 2^(-2n))^2, where n is the length of the protocol parameters. The attack can be accomplished with just three runs of the protocol. The success probability of the secret disclosure attack against the improved version of Yoon's protocol is almost 1, while the complexity is just two runs of the protocol and 2^16 off-line evaluations of the PRNG function.
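A quick numeric check makes the stated costs concrete. The parameter length n = 32 below is an example value, not one fixed by the abstract; for any realistic n the desynchronization success probability (1 - 2^(-2n))^2 is indistinguishable from 1, and 2^16 PRNG evaluations is trivial off-line work.

```python
# Success probability of the desynchronization attack: (1 - 2^(-2n))^2.
n = 32                               # example bit-length of protocol parameters
p_desync = (1 - 2 ** (-2 * n)) ** 2
print(f"desynchronization success for n={n}: {p_desync:.20f}")

# Off-line work for the secret disclosure attack: 2^16 PRNG evaluations.
prng_evals = 2 ** 16
print(f"off-line work: {prng_evals} PRNG evaluations")   # 65536
```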
J. Hajian Nezhad; Majid Vafaei Jahan; M. Tayarani-N; Z. Sadrnezhad
Abstract
Recent improvements in web standards and technologies enable attackers to hide and obfuscate infectious code with new methods and thus escape security filters. In this paper, we study the application of machine learning techniques to detecting malicious web pages. In order to detect malicious ...
Read More
Recent improvements in web standards and technologies enable attackers to hide and obfuscate infectious code with new methods and thus escape security filters. In this paper, we study the application of machine learning techniques to detecting malicious web pages. To this end, we propose and analyze a novel set of features covering HTML, JavaScript (the jQuery library), and XSS attacks. The proposed features are evaluated on a data set gathered by a crawler from malicious web domains and IP and address blacklists. For the purpose of evaluation, we use a number of machine learning algorithms. Experimental results show that, using the proposed set of features, the C4.5 decision tree algorithm offers the best performance, with 97.61% accuracy and an F1-measure of 96.75%. We also rank the quality of the features. Experimental results suggest that nine of the proposed features are among the twenty most discriminative features.
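The abstract does not list the actual features, so the extractor below is a hypothetical sketch of the kind of HTML/JavaScript indicators such a pipeline typically feeds to a classifier; the feature names and patterns are illustrative guesses, not the authors' feature set.

```python
import re

def extract_features(html):
    """Count a few obfuscation-related indicators in raw page source.
    Each entry becomes one column of the classifier's feature vector."""
    return {
        "n_script_tags": len(re.findall(r"<script\b", html, re.I)),
        "n_iframes": len(re.findall(r"<iframe\b", html, re.I)),
        "uses_eval": int("eval(" in html),           # dynamic code execution
        "uses_unescape": int("unescape(" in html),   # classic decode-and-run trick
        "n_hex_strings": len(re.findall(r"\\x[0-9a-fA-F]{2}", html)),
    }

page = '<html><script>eval(unescape("%61"))</script><iframe src="x"></iframe></html>'
print(extract_features(page))
```

Vectors like this, computed per crawled page, are what a decision-tree learner such as C4.5 then splits on to separate malicious pages from benign ones.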
Mohammad Reza Mohammadrezaei; Mohammad Ebrahim Shiri; Amir Masoud Rahmani
Abstract
Detection of fake accounts on social networks is a challenging task. Previous methods for identifying fake accounts have not considered the strength of the users' communications, thus reducing their efficiency. In this work, we present a detection method based on the users' ...
Read More
Detection of fake accounts on social networks is a challenging task. Previous methods for identifying fake accounts have not considered the strength of the users' communications, thus reducing their efficiency. In this work, we present a detection method based on the users' similarities, taking into account the network communications of the users. In the first step, similarity measures such as common neighbors, common neighbor graph edges, cosine similarity, and the Jaccard similarity coefficient are calculated from the adjacency matrix of the graph of the social network. In the next step, in order to reduce the complexity of the data, Principal Component Analysis is applied to each computed similarity matrix to provide a set of informative features. Then, a set of highly informative eigenvectors is selected using the elbow method. The extracted features are employed to train a One-Class Classification (OCC) algorithm. Finally, this trained model is employed to identify fake accounts. Our experimental results indicate the promising performance of the proposed method, with a detection accuracy of 99.6% and a false negative rate of 0%. We conclude that bringing similarity measures and one-class classification algorithms into play, rather than multi-class algorithms, provides better results.
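The first step can be sketched concretely. Below, a Jaccard similarity matrix (one of the measures the abstract lists) is computed from a toy adjacency matrix; each row of such a matrix would then be fed to PCA in the next step of the pipeline. The graph is a made-up example.

```python
def jaccard_matrix(adj):
    """Pairwise Jaccard similarity of node neighborhoods, computed
    from a 0/1 adjacency matrix: |N(i) & N(j)| / |N(i) | N(j)|."""
    n = len(adj)
    neigh = [set(j for j in range(n) if adj[i][j]) for i in range(n)]
    sim = [[0.0] * n for _ in range(n)]
    for i in range(n):
        for j in range(n):
            union = neigh[i] | neigh[j]
            if union:
                sim[i][j] = len(neigh[i] & neigh[j]) / len(union)
    return sim

# Toy undirected network: nodes 0-1-2 form a triangle, node 3 hangs off 2.
adj = [
    [0, 1, 1, 0],
    [1, 0, 1, 0],
    [1, 1, 0, 1],
    [0, 0, 1, 0],
]
sim = jaccard_matrix(adj)
print(round(sim[0][1], 3))   # 0.333 -- nodes 0 and 1 share one of three neighbors
```

In the full method, PCA then compresses each such matrix, the elbow method picks the informative eigenvectors, and a one-class classifier trained on genuine accounts flags the outliers as fake.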
Sh. Shamaei; A. Movaghar
Abstract
Mobile ad-hoc networks (MANETs) have no fixed infrastructure, so all network operations such as routing and packet forwarding are performed by the nodes themselves. However, almost all existing routing protocols focus on performance measures and disregard security issues. Since these protocols ...
Read More
Mobile ad-hoc networks (MANETs) have no fixed infrastructure, so all network operations such as routing and packet forwarding are performed by the nodes themselves. However, almost all existing routing protocols focus on performance measures and disregard security issues. Since these protocols consider all nodes to be trustworthy, they are prone to serious security threats. The wormhole attack is one such threat against the routing process, and it is particularly challenging to detect and prevent in MANETs. In this paper, a two-phase scheme is proposed to detect and prevent wormhole attacks. The first phase checks whether a wormhole tunnel exists on the selected path. If such a tunnel exists, the second phase is applied to confirm the existence of the wormhole attack and to locate the malicious node. The proposed scheme can detect all variants of this attack, both in-band and out-of-band, in hidden as well as exposed modes, without any need for special hardware or time synchronization. To evaluate the performance of the proposed scheme, various scenarios are simulated in the NS-2 simulator, and different measures are assessed. The results obtained from simulating the proposed scheme and other benchmarks indicate that, for most of the criteria considered in this paper, the proposed scheme outperforms the methods proposed in prior works.
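The abstract does not detail the first-phase test, so the sketch below substitutes a common hop-count anomaly heuristic from the wormhole literature, only to illustrate the style of check a phase-1 filter can perform: a wormhole tunnel makes a route look far shorter, in hops, than the distance it covers could allow. The function, its parameters, and the numbers are all illustrative assumptions.

```python
def suspicious_path(hop_count, euclidean_dist, radio_range):
    """Flag a route whose advertised hop count is lower than the
    minimum number of hops physically needed to cover the distance
    at the given radio range -- a classic wormhole symptom."""
    min_hops = euclidean_dist / radio_range
    return hop_count < min_hops

# A 900 m source-to-destination distance with 250 m radio range needs
# at least ceil(3.6) = 4 hops; a 2-hop route is physically implausible.
print(suspicious_path(hop_count=2, euclidean_dist=900.0, radio_range=250.0))  # True
print(suspicious_path(hop_count=4, euclidean_dist=900.0, radio_range=250.0))  # False
```

A route flagged by such a filter would then be handed to the scheme's second phase, which confirms the attack and pinpoints the malicious node.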
Parvin Rastegari
Abstract
The certificateless public key cryptography (CL-PKC) setting makes it possible to overcome the problems of the conventional public key infrastructure and of ID-based public key cryptography concurrently. A certificateless signcryption (CL-SC) scheme is an important cryptographic primitive that provides ...
Read More
The certificateless public key cryptography (CL-PKC) setting makes it possible to overcome the problems of the conventional public key infrastructure and of ID-based public key cryptography concurrently. A certificateless signcryption (CL-SC) scheme is an important cryptographic primitive that provides the goals of a signature scheme and an encryption scheme at once, in a certificateless setting. In addition to the basic security requirements of a CL-SC scheme (i.e., unforgeability and confidentiality), a new security notion called known session-specific temporary information security (KSSTIS) has recently been proposed in the literature. This notion guarantees the confidentiality of the message even if the temporary information used for creating the signcryption of the message is revealed. However, as discussed in the literature, there is no secure CL-SC scheme in the standard model (i.e., without the assumption of random oracles) that guarantees KSSTIS. In this paper, three recently proposed CL-SC schemes (the schemes of Caixue, Shan, and Ullah et al.) are analyzed, and it is shown that these schemes not only fail to satisfy KSSTIS, but do not even provide the basic security requirements of a CL-SC scheme. Furthermore, an enhanced CL-SC scheme is proposed in the standard model which satisfies KSSTIS.