Ehab Khatter; Dina Ibrahim
Abstract
Time saving and energy consumption have become vital issues that attract the attention of researchers in the field of Underwater Wireless Sensor Networks (UWSNs). Accordingly, there is a strong need to improve the performance of MAC protocols in UWSNs, particularly to enhance the effectiveness of the ALOHA protocol. In this paper, a time-saving ALOHA protocol with slotted carrier sense, which we call ST-Slotted-CS-ALOHA, is proposed. The simulation results demonstrate that the proposed protocol can save time and decrease the average delay compared with other protocols. Moreover, it decreases energy consumption and raises throughput. However, the number of dropped nodes does not improve compared to other protocols.
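For context, the gain that slotting brings to ALOHA-family protocols can be seen in the textbook throughput formulas S = G·e^(-G) (slotted) versus S = G·e^(-2G) (pure). The sketch below only illustrates these classical baselines, not the proposed ST-Slotted-CS-ALOHA itself:

```python
import math

def pure_aloha_throughput(g):
    """Classical pure ALOHA: S = G * e^(-2G)."""
    return g * math.exp(-2 * g)

def slotted_aloha_throughput(g):
    """Classical slotted ALOHA: S = G * e^(-G)."""
    return g * math.exp(-g)

# Sweep the offered load G and locate the peak throughput of each scheme.
loads = [i / 100 for i in range(1, 301)]
peak_pure = max(pure_aloha_throughput(g) for g in loads)
peak_slotted = max(slotted_aloha_throughput(g) for g in loads)

print(f"peak pure ALOHA throughput:    {peak_pure:.4f}")    # ~1/(2e), about 0.1839
print(f"peak slotted ALOHA throughput: {peak_slotted:.4f}") # ~1/e,    about 0.3679
```

Slotting alone roughly doubles the achievable channel utilization, which is the baseline the carrier-sense extension then improves upon.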
Felisa Córdova; Claudia Durán; Fredi Palominos
Abstract
Port organizations have focused their efforts on physical or tangible assets, generating profitability and value. However, it is recognized that the greatest sustainable competitive advantage is the creation of knowledge using the intangible assets of the organization. The Balanced Scorecard, as a performance tool, has incorporated intangible assets such as intellectual, structural and social capital into management. In this way, the port community can count on new forms of managing innovation, strengthening organizational practices, and increasing collaborative work teams. In this study, the concepts from analysis of the cognitive SWOT are applied to diagnose the port activity and its community. In workshops with experts, and drawing on the vision, mission, cognitive SWOT and strategies, a cognitive strategic map considering strategic objectives and indicators is designed in the customer, processes, and learning and growth axes for the port and port community. Causal relationships between objectives, associated indicators and incidence factors are established in a forward way from the learning and growth axis to the customer axis. Then, the incidence matrix is developed and the direct and indirect effects between factors are analyzed, which allows recommending the future course of the port and its community.
Ahmed BaniMustafa
Abstract
This paper presents a data mining application in metabolomics. It aims at building an enhanced machine learning classifier that can be used for diagnosing cachexia syndrome and identifying its involved biomarkers. To achieve this goal, a data-driven analysis is carried out using a public dataset consisting of 1H-NMR metabolite profiles. This dataset suffers from the problem of imbalanced classes, which is known to deteriorate the performance of classifiers and to affect their validity and generalizability. The classification models in this study were built using five machine learning algorithms: PLS-DA, MLP, SVM, C4.5 and ID3. These models were built after carrying out a number of intensive data preprocessing procedures to tackle the problem of imbalanced classes and improve the performance of the constructed classifiers. These procedures involve data transformation, normalization, standardization, re-sampling and data reduction using a number of variable importance scorers. The best performance was achieved by an MLP model that was trained and tested using five-fold cross-validation on datasets that were re-sampled using the SMOTE method and then reduced using an SVM variable importance scorer. This model was successful in classifying samples with excellent accuracy and in identifying the potential disease biomarkers. The results confirm the validity of metabolomics data mining for the diagnosis of cachexia. They also emphasize the importance of data preprocessing procedures such as re-sampling and data reduction for improving data mining results, particularly when the data suffer from the problem of imbalanced classes.
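The re-sampling step can be illustrated with a minimal SMOTE-style sketch (numpy only; the data shapes and class sizes are invented for illustration, and a real analysis would use a full SMOTE implementation such as the one in imbalanced-learn):

```python
import numpy as np

rng = np.random.default_rng(0)

def smote_like_oversample(X_min, n_new, k=3):
    """SMOTE-style oversampling sketch: each synthetic sample is a random
    interpolation between a minority-class sample and one of its k nearest
    minority-class neighbours."""
    synthetic = []
    for _ in range(n_new):
        i = rng.integers(len(X_min))
        # Distances from sample i to every other minority sample.
        d = np.linalg.norm(X_min - X_min[i], axis=1)
        neighbours = np.argsort(d)[1:k + 1]   # skip the sample itself
        j = rng.choice(neighbours)
        lam = rng.random()                    # interpolation factor in [0, 1)
        synthetic.append(X_min[i] + lam * (X_min[j] - X_min[i]))
    return np.array(synthetic)

# Imbalanced toy "metabolite profile" data: 40 controls vs 8 cachexia cases.
X_major = rng.normal(0.0, 1.0, size=(40, 5))
X_minor = rng.normal(2.0, 1.0, size=(8, 5))

X_new = smote_like_oversample(X_minor, n_new=32)
X_min_balanced = np.vstack([X_minor, X_new])
print(len(X_major), len(X_min_balanced))  # 40 40
```

After oversampling, both classes contribute equally to training, which is the precondition the abstract identifies for the classifiers to perform well.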
Ayman Atia; Nada Shorim
Abstract
Classification of several gesture types is very helpful in several applications. This paper addresses fast classification of hand gestures using DTW over simple multi-core processors. We present a methodology to distribute templates over multiple cores and then allow parallel execution of the classification. The results are fed to a voting algorithm in which the majority vote is used for classification. The speed of processing increased dramatically due to the combined use of multi-core processors and DTW.
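A minimal sketch of the described scheme, assuming a thread pool stands in for the multi-core template distribution and a 3-nearest-neighbour majority vote (the gesture templates and vote size are illustrative, not the authors' setup):

```python
import numpy as np
from concurrent.futures import ThreadPoolExecutor
from collections import Counter

def dtw_distance(a, b):
    """Classical dynamic-time-warping distance between two 1-D sequences."""
    n, m = len(a), len(b)
    cost = np.full((n + 1, m + 1), np.inf)
    cost[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            d = abs(a[i - 1] - b[j - 1])
            cost[i, j] = d + min(cost[i - 1, j], cost[i, j - 1], cost[i - 1, j - 1])
    return cost[n, m]

def classify(query, templates, workers=4):
    """Template comparisons run in parallel across the pool; the 3 nearest
    templates then vote and the majority label wins."""
    with ThreadPoolExecutor(max_workers=workers) as pool:
        dists = list(pool.map(lambda t: dtw_distance(query, t[1]), templates))
    ranked = sorted(zip(dists, (lbl for lbl, _ in templates)))
    top = [lbl for _, lbl in ranked[:3]]
    return Counter(top).most_common(1)[0][0]

templates = [
    ("circle", [0, 1, 2, 1, 0, -1, -2, -1, 0]),
    ("circle", [0, 1, 2, 2, 1, 0, -1, -2, 0]),
    ("swipe",  [0, 1, 2, 3, 4, 5, 6, 7, 8]),
    ("swipe",  [0, 2, 3, 4, 5, 6, 7, 8, 9]),
]
print(classify([0, 1, 2, 1, 0, -1, -2, -1, 0], templates))  # circle
```

Because each DTW comparison is independent, the template set partitions cleanly across cores, which is what makes the parallel speed-up in the paper possible.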
Muhammad Ali Memon; Zaira Hassan; Kamran Dahri; Asadullah Shaikh; Muhammad Ali Nizamani
Abstract
With the emerging concept of model transformation, information can be extracted from one or more source models to produce the target models. The conversion of these models can be done automatically with specific transformation languages. This conversion requires mapping between both models with the help of dynamic hash tables. Hash tables store reference links between the elements of the source and target models; whenever there is a need to access a target element, the hash table is queried. In contrast, this paper presents an approach that directly creates aspects in the source meta-model with traces. These traces hold references to target elements during execution. Illustrating the idea of model-driven engineering (MDE), this paper proposes a method that transforms UML class models into EMF Ecore models.
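The two tracing strategies can be contrasted in a small sketch (the class names and the woven `trace` attribute are hypothetical, purely to illustrate table-based versus aspect-based trace resolution):

```python
# Hash-table tracing: a separate dict maps each source element to its target.
class SourceClass:
    def __init__(self, name):
        self.name = name

class EcoreEClass:
    def __init__(self, name):
        self.name = name

trace_table = {}                      # source element id -> target element

def transform_with_table(src):
    target = EcoreEClass(src.name)
    trace_table[id(src)] = target     # every later resolution queries this table
    return target

# Aspect-style tracing: the trace is woven onto the source element itself,
# so resolving the target is a direct attribute access, not a table lookup.
def transform_with_aspect(src):
    target = EcoreEClass(src.name)
    src.trace = target                # hypothetical woven 'trace' attribute
    return target

s = SourceClass("Person")
transform_with_table(s)
transform_with_aspect(s)
print(trace_table[id(s)].name, s.trace.name)  # Person Person
```

The aspect variant trades a lookup per access for an extra attribute on each source element, which is the core of the trade-off the paper explores.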
Evelina Pencheva; Ivaylo Atanasov; Ivaylo Asenov
Abstract
The fifth generation (5G) system architecture is defined as service-based, and the core network functions are described as sets of services accessible through application programming interfaces (API). One of the components of 5G is Multi-access Edge Computing (MEC), which provides open access to radio network functions through APIs. Using the mobile edge API, third-party analytics applications may provide intelligence in the vicinity of end users, which improves network performance and enhances user experience. In this paper, we propose a new mobile edge API to access and control mobility at the network edge. The application logic for provisioning access and mobility policies may be based on considerations like load level information per radio network slice instance, user location, accumulated usage, local policy, etc. We describe the basic API functionality through typical use cases and provide the respective data model, which represents the resource structure and data types. Some implementation aspects, related to modeling the resource states as seen by a mobile edge application and by the network, are discussed.
Mohammed J.F. Alenazi
Abstract
Standard TCP is the de facto reliable transfer protocol for the Internet. It is designed to establish a reliable connection using only a single network interface. However, standard TCP with single interfacing performs poorly due to intermittent node connectivity, which requires the re-establishment of connections as IP addresses change. Multi-path TCP (MPTCP) has emerged to utilize multiple network interfaces in order to deliver higher throughput. Resilience to link failures can be better supported in MPTCP, as the segments' communication is maintained via alternative interfaces. In this paper, the resilience of MPTCP to link failures against several challenges is evaluated. Several link failure scenarios are applied to examine all aspects of MPTCP, including congestion algorithms, path management, and subflow scheduling. In each scenario, the behavior of MPTCP is studied by observing and analyzing the throughput and delay. The evaluation of the results indicates MPTCP resilience to a low number of failed links. However, as the number of failed links increases, MPTCP can only recover full throughput if the link failure occurs on the server side. In addition, in the presence of link failures, the lowest-RTT MPTCP scheduler yields the shortest delivery time while providing the minimum application jitter.
Joel E. Cordeiro Junior; Marcelo S. Alencar; José V. dos Santos Filho; Karcius D. R. Assis
Abstract
This paper presents an investigation of the performance of the Non-Orthogonal Multiple Access (NOMA) in the power domain scheme. A Power Allocation (PA) method is proposed from analysis of the NOMA throughput expression. This method aims to provide fair opportunities for users to improve their performance. Thus, NOMA users can achieve rates higher than, or equal to, the rates obtained with the conventional Orthogonal Multiple Access (OMA) in the frequency domain schemes. The proposed method is evaluated and compared with other PA techniques through system-level computer simulations. The results obtained indicate that the proposed method increases the average cell spectral efficiency and maintains a good fairness level with regard to resource allocation among the users within a cell.
Matthias Gottlieb; Marwin Shraideh; Isabel Fuhrmann; Markus Böhm; Helmut Krcmar
Abstract
Data Virtualization (DV) has become an important method to store and handle data cost-efficiently. However, it is unclear what kind of data should be virtualized and when. We applied a design science approach in the first stage to survey the state of the art of DV regarding data integration and to present a concept matrix. We extend the knowledge base with a systematic literature review, resulting in 15 critical success factors for DV. Practitioners can use these critical success factors to decide between DV and Extract, Transform, Load (ETL) as a data integration approach.
Saad Ali Alahmari
Abstract
The increasing volatility in pricing and growing potential for profit in digital currency have made predicting the price of cryptocurrency a very attractive research topic. Several studies have already been conducted using various machine-learning models to predict cryptocurrency prices. The study presented in this paper applies a classic Autoregressive Integrated Moving Average (ARIMA) model to predict the prices of the three major cryptocurrencies (Bitcoin, XRP and Ethereum) using daily, weekly and monthly time series. The results demonstrate that ARIMA outperforms most other methods in predicting cryptocurrency prices on a daily time series basis in terms of mean absolute error (MAE), mean squared error (MSE) and root mean squared error (RMSE).
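As a stand-in for a full ARIMA fit (which would normally come from a library such as statsmodels), a least-squares AR(p) fit illustrates the autoregressive core of the model on a synthetic "price" series; the series and coefficients here are invented for illustration:

```python
import numpy as np

def fit_ar(series, p=1):
    """Least-squares fit of an AR(p) model: x[t] = c + sum_i a_i * x[t-i].
    A minimal stand-in for the AR part of ARIMA (no differencing, no MA terms)."""
    X = np.column_stack(
        [np.ones(len(series) - p)] +
        [series[p - i - 1:len(series) - i - 1] for i in range(p)]
    )
    y = series[p:]
    coeffs, *_ = np.linalg.lstsq(X, y, rcond=None)
    return coeffs                      # [c, a_1, ..., a_p]

def forecast_ar(series, coeffs, steps=1):
    """Iterate the fitted recurrence forward to produce point forecasts."""
    p = len(coeffs) - 1
    hist = list(series)
    for _ in range(steps):
        nxt = coeffs[0] + sum(coeffs[i + 1] * hist[-i - 1] for i in range(p))
        hist.append(nxt)
    return hist[-steps:]

# Synthetic series following x[t] = 0.5 + 0.9 * x[t-1] + noise.
rng = np.random.default_rng(1)
x = np.zeros(500)
for t in range(1, 500):
    x[t] = 0.5 + 0.9 * x[t - 1] + rng.normal(0, 0.1)

c, a1 = fit_ar(x, p=1)
print(round(a1, 2))  # close to the true 0.9
```

A real ARIMA(p, d, q) fit would add differencing (d) and moving-average terms (q) on top of this autoregression, but the recovery of the lag coefficient shows the mechanism the paper relies on.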
Anwar Saeed; Muhammad Yousif; Areej Fatima; Sagheer Abbas; Muhammad Adnan Khan; Leena Anum; Ali Akram
Abstract
With the growth of the cloud computing industry, many services are provided based on different deployment criteria. Nowadays everyone tries to remain connected and demands maximum utilization of resources with minimum time and effort, making optimum resource utilization an important challenge in cloud computing. To overcome this issue, many techniques have been proposed, but still no comprehensive results have been achieved. Cloud computing offers elastic and scalable resource-sharing services by using resource management. In this article, a hybrid approach is proposed with the objective of achieving maximum resource utilization. In the proposed method, an adaptive back-propagation neural network and multi-level priority-based scheduling are combined for optimum resource utilization. This hybrid technique improves the utilization of resources in cloud computing. Simulation results on a job dataset are reported in terms of MSE and regression, comparing three training algorithms: Scaled Conjugate Gradient (SCG), Levenberg-Marquardt (LM) and Bayesian Regularization (BR). BR gives better results than the other algorithms with 60 hidden-layer neurons: in validation, BR achieves an MSE of 2.05 and a regression of 95.8, LM gives an MSE of 2.91 and a regression of 94.06, and SCG gives an MSE of 3.92 and a regression of 91.85.
Tahir Alyas; Gulzar Ahmad; Yousaf Saeed; Muhammad Asif; Umer Farooq; Asma Kanwal
Abstract
Internet of Things (IoT) and cloud computing technologies have connected the infrastructure of the city to create a context-aware, more intelligent city that better utilizes its major resources. These technologies have much potential to solve the challenges of urban areas around the globe and facilitate the citizens. A framework model that enables the integration of sensor data and analysis of the data in the context of smart parking is proposed. These technologies use sensors and devices deployed around the city parking areas, sending real-time data through edge computers to the main cloud servers. Mobile apps are developed that use real-time data served from the servers of the parking facilities in the city. Fuzzification is shown to be a capable mathematical approach for modeling city parking issues. To solve city parking problems, a detailed analysis of the proposed fuzzy logic systems is developed. This paper presents the results achieved using a Mamdani Fuzzy Inference System to model a complex smart parking system. These results are verified using MATLAB simulation.
Wajdi Aljedaibi; Sufian Khamis
Abstract
Project management is an important factor in accomplishing the decision to implement large-scale software systems (LSS) in a successful manner. Effective project management comes into play to plan, coordinate and control such a complex project. The project management factor has been argued to be one of the important Critical Success Factors (CSF), which need to be measured and monitored carefully during the implementation of Enterprise Resource Planning (ERP) systems. The goal of this article is to develop "CSF-Live!", a method for measuring, monitoring, and controlling critical success factors of large-scale software systems. To achieve this goal, we apply CSF-Live to the project management CSF. CSF-Live uses the Goal/Question/Metric (GQM) paradigm to yield a flexible framework containing several metrics, which we used to develop a formulation enabling the measurement of the project management CSF. The formulation we developed for the project management CSF implies the significance of having proper project management when conducting an ERP system implementation, since it is positively associated with the success of the ERP.
Istabraq M. Al-Joboury; Emad H. Al-Hemiary
Abstract
The Internet of Things (IoT) is becoming the future of a global data field in which embedded devices communicate with each other, exchange data and make decisions through the Internet. IoT could improve the quality of life in smart cities, but a massive amount of data from different smart devices could slow down or crash database systems. In addition, transferring IoT data to the Cloud for monitoring information and generating feedback leads to high delay at the infrastructure level. Fog Computing can help by offering services closer to edge devices. In this paper, we propose an efficient system architecture to mitigate the problem of delay. We provide performance analysis of response time, throughput and packet loss for the MQTT (Message Queue Telemetry Transport) and HTTP (Hyper Text Transfer Protocol) protocols on Cloud- and Fog-based servers, with a large volume of data from an emulated traffic generator working alongside one real sensor. We implement both protocols in the same architecture, with low-cost embedded devices connected to local and Cloud servers with different platforms. The results show that HTTP response time is 12.1 and 4.76 times higher than MQTT on Fog- and Cloud-based servers located in the same geographical area as the sensors, respectively. The worst performance is observed when the Cloud is public and outside the country region. The results obtained for throughput show that MQTT has the capability to carry the data with the available bandwidth and the lowest percentage of packet loss. We also show that the proposed Fog architecture is an efficient way to reduce latency and enhance performance in Cloud-based IoT.
Oula L. Abdulsattar; Emad H. Al-Hemiary
Abstract
In this paper, we implement a Virtualized Network Management Laboratory (VNML) linked to the college campus network for educational purposes. The laboratory is created using the VirtualBox virtualizer and GNS3 on a single Linux Ubuntu HP DL380 G7 server platform. A total of 35 virtual devices (routers, switches and virtual machines) are created and distributed over the virtualized campus network, with seven network management tools configured and running. The proposed laboratory aims to overcome the limitations of physical network hardware in any educational facility that teaches network management in its curriculum. Other advantages include ease of managing the laboratory and overriding physical location constraints within the same geographical area.
Mosleh Zeebaree; Musbah Aqel
Abstract
This paper explores intelligent decision support systems (IDSS) and decision support systems (DSS). The inception and development of systems and technological advances such as data warehouses, enterprise resource planning and advanced planning systems, together with top trends like the Internet of Things, big data, the Internet and business intelligence, have brought more advancement to the operation of decision support systems. This paper gives a systematic review of the various applications of IDSS based on knowledge, communication, documents, etc., and goes on to describe and differentiate two DSS methods: the Analytic Network Process (ANP) and the Decision-Making Trial and Evaluation Laboratory (DEMATEL).
Hachem H. Alaoui; Elkaber Hachem; Cherif Ziti; Mohammed Karim
Abstract
Because the face can reveal so much hidden information, we need to interpret these data and benefit from them. Hence, our paper presents a new and productive facial image representation based on locality-sensitive hashing (LSH). This strategy makes it possible to recognize the students who pursue their training on our learning platform; during every session, an image of the learner is taken by the webcam and compared to the one already stored in the database. As soon as the learner is recognized, he/she is assigned to an appropriate profile that takes into consideration his/her weaknesses and strengths, which is conducted with the help of J48 as a predictive study. Furthermore, we utilize a light processing module on the client device with a compact code, so that we have enough transmission capacity to send the features over the network and the option to record many photos in an enormous database in the cloud.
Mona Alsalamah; Huda Alwabli; Hutaf Alqwifli; Dina M. Ibrahim
Abstract
The functionality of web-based systems can be affected by many threats. In fact, web-based systems provide several services built on databases, which makes them prone to Structured Query Language (SQL) injection attacks. For that reason, many research efforts have been made to deal with this attack. The majority of protection techniques adopt a defence strategy that tends to produce many false positives at the cost of extreme response times. Indeed, SQL injection attacks remain a serious challenge for web-based systems; this kind of attack is still attractive to hackers and continues to grow. For that reason, many approaches have been proposed to deal with this issue, essentially based on statistical or dynamic approaches, machine learning, or even deep learning. This paper discusses and reviews the existing techniques used to detect and prevent SQL injection attacks. In addition, it outlines challenges, open issues and future trends of solutions in this context.
Amir Ashtari; Ahmad Shabani; Bijan Alizadeh
Abstract
This paper presents a novel RF-PUF-based authentication scheme, called RKM-PUF, which takes advantage of dynamic random key generation that depends upon both communicating parties in the network to detect intrusion attacks. Unlike existing authentication schemes, our proposed approach takes the physical characteristics of both involved parties into account to generate the secret key, resulting in secure mutual authentication of both nodes in a wireless network. The experimental results show that RKM-PUF can reach up to 99% identification accuracy.
Meharaj Begum A; Michael Arock
Abstract
Whatever malware protection emerges, data remain prone to cyber-attacks. The most threatening, the Structured Query Language Injection Attack (SQLIA), happens at the database layer of web applications, leading to unlimited and unauthorized access to confidential information through malicious code injection. Since feature extraction accuracy significantly influences detection results, extracting the features of a query that predominantly contribute to SQL Injection (SQLI) is the most challenging task for researchers. The proposed work primarily focuses on this task using a modified parse-tree representation. Some existing techniques used a graph representation to identify characteristics of the query based on a predefined fixed list of SQL keywords. As a complete graph representation requires high traversal time complexity due to unnecessary links, a modified parse tree of tokens is proposed here with restricted links between the operators (internal nodes) and operands (leaf nodes) of the WHERE clause. The sibling leaf nodes comprise the WHERE clause operands, where attackers try to manipulate the conditions to be true in all cases. A novelty of this work is identifying patterns of legitimate and injected queries from the proposed modified parse tree and applying a pattern-based neural network (NN) model for detecting attacks. The proposed approach is applied in various machine learning (ML) models and a neural network model, the Multi-Layer Perceptron (MLP). With the scrupulously extracted patterns and their importance (weights) in legitimate and injected queries, the MLP model provides better results in terms of accuracy (97.85%), precision (93.8%) and AUC (97.8%).
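The kind of WHERE-clause operand analysis described above can be sketched with a toy tautology check (the regex tokenizer and the single `'1'='1'`-style rule are drastic simplifications of the paper's parse-tree and NN model, for illustration only):

```python
import re

def where_operand_pairs(query):
    """Extract (left, op, right) triples from the WHERE clause -- the sibling
    operands that the modified parse tree restricts its links to."""
    m = re.search(r"\bwhere\b(.*)", query, re.IGNORECASE)
    if not m:
        return []
    clause = m.group(1)
    # Very naive tokenization: word/quoted operands around comparison operators.
    return re.findall(r"([\w']+)\s*(=|<>|<|>)\s*([\w']+)", clause)

def looks_injected(query):
    """Flag a query whose WHERE clause contains a tautology: an equality
    whose two operands are identical literals, e.g. '1'='1'."""
    for left, op, right in where_operand_pairs(query):
        if op == "=" and left == right:
            return True
    return False

legit = "SELECT * FROM users WHERE name = 'bob' AND pwd = 'x1'"
attack = "SELECT * FROM users WHERE name = '' OR '1'='1'"
print(looks_injected(legit), looks_injected(attack))  # False True
```

In the paper, such operand patterns are not hand-coded rules but features fed to the MLP; the sketch only shows why the WHERE-clause siblings carry the attack signal.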
Danial Shiraly; Nasrollah Pakniat; Ziba Eslami
Abstract
Public key encryption with keyword search (PEKS) is a cryptographic primitive designed for performing secure search operations over encrypted data stored on untrusted cloud servers. However, in some applications of cloud computing, there is a hierarchical access-privilege setup among users, so that upper-level users should be able to monitor data used by lower-level ones in the hierarchy. To support such situations, Wang et al. introduced the notion of hierarchical ID-based searchable encryption. However, Wang et al.'s construction suffers from a serious security problem. To provide a PEKS scheme that securely supports hierarchical structures, Li et al. introduced the notion of hierarchical public key encryption with keyword search (HPEKS). However, Li et al.'s HPEKS scheme is established on a traditional public key infrastructure (PKI), which suffers from the costly certificate management problem. To address these issues, in this paper we consider designated-server HPEKS in the identity-based setting. We introduce the notion of designated-server hierarchical identity-based searchable encryption (dHIBSE) and provide a formal definition of its security model. We then propose a dHIBSE scheme and prove its security under our model. Finally, we provide performance analysis as well as comparisons with related schemes to show the overall superiority of our dHIBSE scheme.
Fatemeh Pirmoradian; Mohammad Dakhilalian; Masoumeh Safkhani
Abstract
The Internet of Things (IoT) is an innovation in the world of technology. Continuous technological advancements based on the IoT cloud and booming wireless technology have revolutionized human life, and remote health monitoring of patients is no exception. Telecare Medicine Information Systems (TMIS) connect Home Health Care (HHC) organizations and patients at home, and collect, save, manage and transmit the Electronic Medical Records (EMR) of patients. Security in remote medicine has therefore always been a very serious challenge, and biometrics-based schemes play a crucial role in the IoT, Wireless Sensor Networks (WSN), etc. Recently, Xiong et al. and Mehmood et al. presented key exchange methods for healthcare applications which they claimed provide greater privacy. Unfortunately, we show that these schemes suffer from privacy issues and a key compromise impersonation (KCI) attack. To remove such restrictions, in this paper a novel scheme (ECKCI) using Elliptic Curve Cryptography (ECC) with the KCI resistance property is proposed. Furthermore, we demonstrate that ECKCI not only overcomes problems such as the key compromise impersonation attack in previous protocols, but also resists all specific attacks. Finally, a suitable equilibrium between the performance and security of ECKCI in comparison with these recently proposed protocols is obtained. The simulation results with the Scyther and ProVerif tools also show that ECKCI is safe in terms of security.
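The ECC primitive underlying such schemes can be illustrated with a toy Diffie-Hellman exchange on a textbook-sized curve (this is not the ECKCI protocol, and the parameters are far too small to be secure; they only show how both parties derive the same shared point):

```python
# Toy curve y^2 = x^3 + 2x + 2 mod 17 with generator G = (5, 1).
P, A = 17, 2

def add(p1, p2):
    """Elliptic-curve point addition (None is the point at infinity)."""
    if p1 is None: return p2
    if p2 is None: return p1
    (x1, y1), (x2, y2) = p1, p2
    if x1 == x2 and (y1 + y2) % P == 0:
        return None
    if p1 == p2:
        s = (3 * x1 * x1 + A) * pow(2 * y1, -1, P) % P   # tangent slope
    else:
        s = (y2 - y1) * pow(x2 - x1, -1, P) % P          # chord slope
    x3 = (s * s - x1 - x2) % P
    return (x3, (s * (x1 - x3) - y1) % P)

def mul(k, point):
    """Double-and-add scalar multiplication."""
    result, addend = None, point
    while k:
        if k & 1:
            result = add(result, addend)
        addend = add(addend, addend)
        k >>= 1
    return result

G = (5, 1)
alice_priv, bob_priv = 5, 7
alice_pub, bob_pub = mul(alice_priv, G), mul(bob_priv, G)
# Each side combines its own private key with the other's public key.
print(mul(alice_priv, bob_pub), mul(bob_priv, alice_pub))  # same point twice
```

A real scheme would run on a standardized curve with a ~256-bit prime; the point is that both computations yield a·(b·G) = b·(a·G), the shared secret the key exchange builds on.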
Aymen M. Al-Kadhimi; Salim A. Mohammed Ali; Sami Hasan
Abstract
This paper presents the design of an asymmetrical generalized Chebyshev low-pass filter realized with a suspended substrate stripline. The study presents the synthesis and design of an asymmetrical prototype of degree 11, with a cut-off frequency of 2.5 GHz, a passband return loss better than 26 dB, and a broad stopband rejection of 55 dB. The filter produces 11 transmission zeros (attenuation poles), one at infinity and five pairs located at finite frequencies, offering wide stopband attenuation as well as sharp selectivity. The filter is built on a suspended stripline structure (SSS) using an aluminium cavity with a 2 mm ground spacing. The measurements show reasonable agreement with the simulated response. In summary, an eleventh-order low-pass filter satisfying a generalized Chebyshev response with an asymmetrical topology, wide stopband rejection, and high selectivity has been presented; of its 11 transmission zeros, one is located at infinity and five pairs are located at different finite frequencies, contributing response improvements in both the passband and stopband regions.
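For intuition about the filter specifications above, the attenuation of a *standard* (all-pole) Chebyshev low-pass prototype can be computed directly; this is a simplification, since the paper's generalized Chebyshev design additionally places finite-frequency transmission zeros to sharpen the stopband. The 0.011 dB ripple below is an assumed value roughly consistent with a 26 dB passband return loss:

```python
# Attenuation of a standard n-th order Chebyshev low-pass prototype.
# Simplified sketch: the generalized Chebyshev response in the paper
# also has finite transmission zeros, which this model omits.
import math

def chebyshev_attenuation_db(w, n=11, ripple_db=0.011):
    """Attenuation in dB at normalized frequency w (cutoff = 1)."""
    eps2 = 10 ** (ripple_db / 10) - 1          # ripple factor epsilon^2
    if abs(w) <= 1:
        t = math.cos(n * math.acos(w))          # Chebyshev poly, passband
    else:
        t = math.cosh(n * math.acosh(abs(w)))   # Chebyshev poly, stopband
    return 10 * math.log10(1 + eps2 * t * t)
```

Even without finite transmission zeros, an 11th-order response rolls off steeply: at twice the cutoff frequency the attenuation already far exceeds the 55 dB stopband target, which illustrates why a high degree was chosen.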
Navid Vafaei; Maryam Porkar; Hamed Ramzanipour; Nasour Bagheri
Abstract
SKINNY is a lightweight tweakable block cipher first introduced at CRYPTO 2016. SKINNY comes in two block sizes, 64 bits and 128 bits, as well as three TWEAK versions. This paper first reports our findings that improve the effectiveness of Differential Fault Analysis (DFA) on SKINNY, and then describes a hardware implementation of this attack. Assuming that the TWEAK is fixed, we first present an enhanced DFA on SKINNY64-64 and SKINNY128-128. To retrieve the master key with the minimum number of faults, this approach relies on fault propagation in intermediate rounds. In our latest evaluations, we can retrieve the master key with 2 and 3 faults in SKINNY64-64 and SKINNY128-128, respectively, compared with 3 and 4 faults for the 64-bit and 128-bit versions in the models presented in prior work. Using the glitch model and a set of affordable hardware equipment, we injected faults into various rounds of the SKINNY algorithm in the implementation phase. More precisely, we can inject a single nibble fault into a particular round by determining the precise execution timing of the sub-function.
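The mechanism DFA exploits can be sketched in a few lines: a single-nibble fault injected before the final rounds produces a ciphertext differential whose pattern leaks key information. The toy cipher below is invented for illustration; it is not the SKINNY round function, and its S-box and mixing step are assumptions:

```python
# Toy single-nibble fault-injection sketch (NOT the SKINNY round function;
# the S-box and diffusion step here are invented for illustration).
SBOX = [0xC, 0x6, 0x9, 0x0, 0x1, 0xA, 0x2, 0xB,
        0x3, 0x8, 0x5, 0xD, 0x4, 0xE, 0x7, 0xF]

def toy_round(state, key):
    # substitute each nibble, then a simple XOR-based diffusion step
    s = [SBOX[n] for n in state]
    mixed = [s[i] ^ s[(i + 1) % 4] for i in range(4)]
    return [m ^ k for m, k in zip(mixed, key)]

def encrypt(state, keys):
    for k in keys:
        state = toy_round(state, k)
    return state

keys = [[0x1, 0x2, 0x3, 0x4], [0x5, 0x6, 0x7, 0x8]]
plain = [0xA, 0xB, 0xC, 0xD]
correct = encrypt(plain, keys)

# inject a fault: flip one nibble of the state before the last round
faulty_state = toy_round(plain, keys[0])
faulty_state[0] ^= 0x4                       # single-nibble glitch fault
faulty = toy_round(faulty_state, keys[1])

# the attacker observes only the correct/faulty ciphertext pair and uses
# the differential pattern to filter last-round key candidates
diff = [c ^ f for c, f in zip(correct, faulty)]
```

The earlier the fault is injected, the further it propagates through the diffusion layers; exploiting that propagation across intermediate rounds is what lets the paper's enhanced attack reduce the required fault count.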
Hanan Aljoaey; Khawla Almutawa; Ruyuf Alabdali; Dina M.Ibrahim
Abstract
Web application protection is today’s most important battleground between victim, intruder, and web service resource. User authentication becomes critical when a legitimate user of a web application abruptly ends contact while the session is still active, and an unauthorized user picks up the same session to gain access to the system. For many corporations, risk detection is still a problem; in other cases, it is a standard way of operating that provides the requisite protection to keep the product free of weaknesses. Using various types of software to identify different security vulnerabilities helps both developers and organizations launch applications securely, saving time and money. Different combinations of tools have been seen to enhance protection in recent years, but it had not been possible to combine the types of tools available on the market until the writing of this report. The aim of this paper is to clarify vulnerabilities in broken authentication and session management. It is worth noting that if the developer practices the preventive techniques outlined in this article, the chances of the exploits discussed are reduced. This paper reveals that the most powerful ways to exploit the broken authentication and session management vulnerabilities of web applications in those domains are the session misconfiguration attack and the cracking/guessing of weak passwords. It correspondingly includes techniques to defend authentication, the most important being the use of a robust encryption system, setting password rules, and securing the session ID.
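The defensive measures the abstract lists, a strong random session ID, password rules, and session protection, can be sketched as follows. The class and function names are illustrative assumptions, and SHA-256 stands in for a proper slow password hash (bcrypt/argon2) only to keep the sketch self-contained:

```python
# Minimal sketch of the session-hardening measures discussed above.
# Names are illustrative; real systems should use a slow password hash
# (bcrypt/argon2) rather than plain SHA-256.
import hashlib
import re
import secrets

def new_session_id():
    # 256 bits of CSPRNG output defeats session-ID guessing/brute force
    return secrets.token_hex(32)

def password_ok(pw):
    # a simple password rule: minimum length plus mixed character classes
    return bool(len(pw) >= 12
                and re.search(r"[a-z]", pw)
                and re.search(r"[A-Z]", pw)
                and re.search(r"\d", pw))

class Session:
    def __init__(self):
        self.sid = new_session_id()
        self.authenticated = False

    def login(self, stored_hash, attempt):
        if hashlib.sha256(attempt.encode()).hexdigest() == stored_hash:
            # regenerate the ID on privilege change: this blocks session
            # fixation, one form of session misconfiguration
            self.sid = new_session_id()
            self.authenticated = True
        return self.authenticated
```

Regenerating the session ID at login is the key step against the session-takeover scenario described above: an attacker who learned the pre-login ID holds a token that is worthless after authentication.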