Completed Projects

Computing research students are highly active and successful across a range of subjects. The list below shows examples of the work of research students who have completed their degrees.

Please contact the School of Computing (by email: admissions@buckingham.ac.uk or by phone: +44 (0)1280 828 322) if you are interested in a particular thesis or would like more information about research at Buckingham in general.

Authentication, Biometrics, Image Processing, Security / Privacy

Hisham Al-Assam
Entropy Evaluation and Security Measures for Reliable Single / Multi-Factor Biometric Authentication and Biometric Keys
Area of Study: Authentication, Biometrics, Image Processing, Security / Privacy
Award: DPhil Computing, 2013
Supervisors: Professor Sabah Jassim, Dr Harin Sellahewa

The growing deployment of biometrics as a proof of identity has generated a great deal of research into biometrics in recent years, and widened the scope of investigations beyond improving accuracy into mechanisms to deal with serious concerns raised about security and privacy due to the potential misuse of the collected biometric data along with possible attacks on biometric systems. The focus on improving performance of biometric authentication has been more on multi-modal and multi-factor biometric authentication in conjunction with designing recognition techniques to mitigate the adverse effect of variations in recording conditions. Some of these approaches together with the emerging developments of cancellable biometrics and biometric cryptosystems have been used as mechanisms to enhance security and privacy of biometric systems.

This thesis is designed to deal with these complementary and closely related issues through investigations that aim at understanding the impact of varying biometric sample recording conditions on the discriminating information content (entropy) of these samples, and to use the gained knowledge to (1) design adaptive techniques for improved performance of biometric authentication, and (2) propose and test a framework for a proper evaluation of the security of all factors/components involved in biometric keys and multi-factor biometric authentication.

The first part of this thesis consists of a set of theoretical and empirical investigations designed to evaluate and analyse the effect of emerging developments in biometrics systems, with a focus on those related to biometric entropy and multi-factor authentication. The analysis of different biometric entropy measures, proposed in the literature, reveals that variations in biometric sample quality lead to variations in the correlation between biometric entropy values calculated using any of the known measures and the accuracy of the biometric recognition. Furthermore, analysis of the spatial distribution of entropy values in face images reveals a non-uniform distribution.
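
As a rough illustration of what a block-wise analysis of entropy in a face image might look like, here is a generic Shannon-entropy sketch; it is not the specific entropy measures compared in the thesis, and the image array and block size are assumptions.

```python
import numpy as np

def block_entropies(face: np.ndarray, block: int = 8) -> np.ndarray:
    """Shannon entropy (bits) of each non-overlapping block of an 8-bit grey image."""
    h, w = face.shape
    rows, cols = h // block, w // block
    ent = np.zeros((rows, cols))
    for r in range(rows):
        for c in range(cols):
            patch = face[r*block:(r+1)*block, c*block:(c+1)*block]
            counts = np.bincount(patch.ravel(), minlength=256).astype(float)
            p = counts / counts.sum()
            p = p[p > 0]
            ent[r, c] = -(p * np.log2(p)).sum()
    return ent

# Example with a synthetic image: entropy varies from block to block,
# i.e. its spatial distribution is non-uniform.
rng = np.random.default_rng(0)
img = (rng.normal(128, 30, (64, 64)).clip(0, 255)).astype(np.uint8)
print(block_entropies(img).round(2))
```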

We also confirm the widely expected inherent individual differences in biometric feature entropy. Moreover, we expose a myth reported in the literature about the near-perfect accuracy of certain quality-based adaptive recognition schemes; in particular, we demonstrate that the performance evaluation of such schemes has relied on assumptions that cannot be achieved in operational scenarios unless certain requirements are met. We then describe a rigorous evaluation of the security and accuracy of multi-factor biometric authentication schemes that rely on user-based transformations of biometric features. Finally, we provide a comprehensive security analysis of multi-factor biometric cryptosystems based on biometric entropy analysis.

The second part of the thesis builds on the findings of the first part to develop a set of solutions and proposals based on biometric entropy and multi-factor authentication to improve the security and reliability of biometric authentication systems. Firstly, motivated by the non-uniform spatial distribution of entropy values, an incremental fusion scheme for partial face recognition has been developed to enhance partial as well as full-face recognition accuracy, whereby blocks of face regions are fused according to their discriminative entropy ranking. We demonstrate that the incremental partial face recognition scheme requires as little as 30% of the face image blocks to achieve optimal accuracy. Secondly, we describe the use of adaptive quality-based feature extraction to enhance the accuracy of biometric-based authentication. Thirdly, to address security and privacy concerns about multi-factor biometric systems, we propose an efficient and stable orthonormal random projection scheme for the generation of revocable biometrics. Fourthly, we propose a new hybrid measure to rigorously assess and quantify the security of biometric cryptosystems based on biometric entropy and other influencing factors. Finally, we propose three practical solutions, of varying complexity and level of security provided, for mobile and remote multi-factor biometric authentication.
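
The orthonormal random projection idea behind revocable (cancellable) biometrics can be sketched as follows. This is a generic illustration using a key-seeded Gaussian matrix orthonormalised by QR decomposition, not the specific stable scheme proposed in the thesis.

```python
import numpy as np

def orthonormal_projection(dim: int, user_key: int) -> np.ndarray:
    """Generate a user-specific orthonormal matrix from a secret key (seed)."""
    rng = np.random.default_rng(user_key)
    gaussian = rng.standard_normal((dim, dim))
    q, _ = np.linalg.qr(gaussian)          # columns of q are orthonormal
    return q

def transform_template(features: np.ndarray, user_key: int) -> np.ndarray:
    """Project a biometric feature vector; distances are preserved (orthonormality),
    and the template can be revoked simply by issuing a new key."""
    q = orthonormal_projection(features.size, user_key)
    return q @ features

f = np.random.default_rng(1).standard_normal(16)
t_old = transform_template(f, user_key=1234)
t_new = transform_template(f, user_key=5678)     # revoked / re-issued template
print(np.linalg.norm(f), np.linalg.norm(t_old))  # norms (and distances) preserved
```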


Biometrics, Image Processing

Azhin Sabir
Gait Based Human Identification System with Covariate Analysis
Area of Study: Biometrics, Authentication
Award: DPhil Computing, 2015
Supervisors: Professor Sabah Jassim, Dr Naseer Al-Jawad

Biometric authentication of a person is a highly challenging and complex problem. Gait, as a behavioural biometric source, has received great interest from researchers due to its potential use in human identification. Gait can be affected by different covariate factors including footwear, clothing, carrying conditions, age, and walking speed.

My research concerns human gait recognition with covariate analysis. We propose a human gait recognition system that accounts for the effects of covariate factors, including clothing and carrying conditions, for use in security applications.

Read thesis


Makki Maliki
Writer Recognition Algorithms Using Features Extracted From Their Arabic Handwriting
Area of Study: Biometrics, Authentication, Image Processing
Award: DPhil Computing, 2015
Supervisors: Professor Sabah Jassim, Dr Naseer Al-Jawad

The Arabic language is spoken and written by some 400 million people, and about one billion Muslims may also read and pray in Arabic.

Words in Arabic, Kurdish, Persian, Urdu and languages with similar scripts consist of different types of characters. Some characters can be connected to their neighbours on one or both sides, while other characters may be completely disconnected from their neighbours. Indeed, words in such languages are aggregations of sub-words that consist of one or more characters. Moreover, some characters may carry a diacritic that distinguishes them from characters of the same shape or the same pronunciation.

Arabic handwritten text has received less attention in biometrics research compared with Latin script. My current research interest is to identify the parts of Arabic handwritten text that best reflect a writer's habits and style, and to develop robust techniques to support a writer identification system.

Read thesis


Rasber Rashid
Robust Biometric Feature Embedding Used for Remote Authentication
Area of Study: Biometrics, Authentication
Award: DPhil Computing, 2015
Supervisors: Professor Sabah Jassim, Dr Harin Sellahewa

Biometrics-based person identification systems have received much attention by the research community over the past decade and have wide-ranging applications in information security, law enforcement, surveillance, and access control systems to buildings and services. Privacy and security issues surrounding the collection, processing, storage and transmission of biometric data are of great concern to all stakeholders of biometric systems.

My research will focus on the principles and techniques in steganography to design novel solutions to embed biometric data in cover objects to securely transmit such sensitive data between two entities. The challenge is to ensure invisibility and robustness of the embedding scheme for a large payload while maintaining the recognition accuracy of the biometric system.
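
To make the embedding idea concrete, here is a minimal least-significant-bit sketch of hiding a binary biometric template in a cover image. The thesis targets more robust embedding (e.g. in a transform domain) with large payloads, so this is illustrative only; the cover image and payload are synthetic.

```python
import numpy as np

def embed_bits(cover: np.ndarray, bits: np.ndarray) -> np.ndarray:
    """Hide a bit string in the least significant bits of an 8-bit cover image."""
    stego = cover.ravel().copy()
    assert bits.size <= stego.size, "payload larger than cover"
    stego[:bits.size] = (stego[:bits.size] & 0xFE) | bits
    return stego.reshape(cover.shape)

def extract_bits(stego: np.ndarray, n: int) -> np.ndarray:
    """Recover the first n hidden bits from the stego image."""
    return stego.ravel()[:n] & 1

rng = np.random.default_rng(0)
cover = rng.integers(0, 256, (64, 64), dtype=np.uint8)
payload = rng.integers(0, 2, 512, dtype=np.uint8)   # e.g. a binarised biometric template
stego = embed_bits(cover, payload)
assert np.array_equal(extract_bits(stego, payload.size), payload)
```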

Read abstract


Wafaa Raheim Hussein
Correction Codes for Face Recognition in Uncontrolled Environments
Area of Study: Biometrics, Image Processing
Award: DPhil Computing, 2014
Supervisors: Professor Sabah Jassim, Dr Harin Sellahewa

Face recognition is one of the most desirable biometric-based identification / authentication systems. It is used to control access to sensitive information / locations or as a proof of a person’s identity, and it is an important tool in crime fighting and crowd control. It is widely recognised that automatic face recognition is a challenging task due to many factors including the very high dimensionality of the vector representation of face images. The challenge is particularly tough when conducted in uncontrolled situations. Despite decades of active research efforts, rapid advances in camera technology, and the significant number of different sophisticated mathematical digital face representations and recognition schemes, the progress on face recognition in uncontrolled situations is very limited. This thesis is concerned with face recognition in uncontrolled conditions by developing a mathematical model of digital face representation that allows the use of mathematical information theory techniques in communication that deal with the effect of distorted data during transmission.

Face recognition in uncontrolled environments is greatly affected by intra-class variation as a result of extreme variation in recording conditions (e.g. illumination, expression, poses, blurring or aging) at different times and locations. Capturing a face image of a person in different recording conditions results in different images with different pixel values and hence different face feature vectors. Here we shall term the differences between the two images as distortion. Many image normalisation and pre-processing techniques have been developed to deal with these variations, resulting in improved but not necessarily optimal performances. The various normalisation / pre-processing techniques do not necessarily remove, or significantly reduce, the distortion of pixel values captured in different recording conditions. Consequently, the corresponding differences between the feature vectors of two face images of the same person captured in different conditions / at different times are unlikely to be reduced as a result of applying these normalisation / pre-processing techniques.

We argue that one way to address this problem is to develop a facial feature vector model with a feature transformation that greatly reduces the feature vector distortion resulting from different recording conditions. We use knowledge developed over decades in information and communication theory to motivate a binary face feature vector representation and to model image distortion in terms of errors in these binary feature vectors. Consequently, this thesis is aimed at developing such binary feature vectors and testing the following hypothesis:

Error detection/correction codes provide an efficient alternative to image pre-processing normalization techniques in mitigating the effect of variations in recording conditions on automatic face recognition.

We shall first demonstrate that binarisation of the coefficients of various subbands of wavelet-decomposed face images provides an appropriate feature vector representation. In fact, we shall demonstrate that in most cases these feature vectors outperform the non-binarised versions. This also opens the way for using error correction techniques to model the feature vector distortions that correspond to variations in recording conditions. We shall focus on recording conditions that result from varying illumination, expression and image blurring / degradation. The Haar discrete wavelet transform to a depth of 3 is the tool for face feature extraction, followed by global / local binarisation of the coefficients in each subband. On its own, the use of binarised feature vectors will not eliminate intra-class variation. This aspect, together with the observation that in communication systems distortion errors (in terms of Hamming distances) are caused by external noise, is the motivation for the use of error detection / correction techniques for face recognition, which reduce the intra-class variation inherent to uncontrolled environments.

For that purpose we have investigated the intra- and inter-class distributions of errors in binary face feature vectors extracted from image windows / blocks of differing sizes as well as from whole feature vectors, and for different recording conditions. The proposed approach is tested for binarised wavelet templates in single and multiple streams. We have developed procedures which select appropriate error correcting codes (ECC) based on the statistical parameters of the intra- and inter-class error distributions in blocks of different sizes (all or specific positions) or in full image feature vectors. We shall demonstrate the validity of the above hypothesis for a wide range of variation in illumination, for a variety of facial expressions, and for a wide range of image blurring / degradation levels. The experimental results establish that using different ECCs for different blocks and different recording conditions (i.e. adaptive ECC selection) significantly outperforms the non-adaptive schemes as the level of illumination or blurring worsens.
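
To make the role of error correction concrete, here is a toy sketch that binarises synthetic coefficients, measures Hamming distance and applies a simple 3-bit repetition code; the thesis selects real ECCs adaptively from intra-/inter-class error statistics, so the code and data below are assumptions for illustration.

```python
import numpy as np

def binarise(coeffs: np.ndarray) -> np.ndarray:
    """Binarise wavelet coefficients against their median (global binarisation)."""
    return (coeffs > np.median(coeffs)).astype(np.uint8)

def hamming(a: np.ndarray, b: np.ndarray) -> int:
    return int(np.count_nonzero(a != b))

def repetition_encode(bits: np.ndarray, n: int = 3) -> np.ndarray:
    return np.repeat(bits, n)

def repetition_decode(bits: np.ndarray, n: int = 3) -> np.ndarray:
    """Majority vote over each group of n repeated bits."""
    return (bits.reshape(-1, n).sum(axis=1) > n // 2).astype(np.uint8)

rng = np.random.default_rng(0)
enrolled = binarise(rng.standard_normal(128))        # reference binary template
codeword = repetition_encode(enrolled)

# A probe of the same person under poor recording conditions: flip ~5% of bits.
noisy = codeword ^ (rng.random(codeword.size) < 0.05).astype(np.uint8)
probe = repetition_decode(noisy)

print("raw Hamming distance:", hamming(codeword, noisy))
print("after ECC decoding:  ", hamming(enrolled, probe))   # most errors corrected
```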


Ali Jassim Abboud
Quality Aware Adaptive Biometric Systems
Area of Study: Biometrics, Image Processing
Award: DPhil Computing, 2011
Supervisors: Professor Sabah Jassim, Dr Harin Sellahewa

In recent years there has been a surge of interest in biometric systems from the academic and industrial research communities, resulting in significant technological advances as well as the emergence of new challenges. Nowadays, biometric technology is sufficiently mature and is being applied in a variety of real-life, everyday applications including access control, ID cards, securing financial transactions, visual surveillance, identity management, etc. However, recent investigations highlight the serious influence of biometric data quality on the performance of biometric systems. Quality assessment of biometric data is becoming a more active research area and is gaining special attention in the biometric community for its important role in improving the performance and efficiency of identity management systems. Biometric data quality has an impact on the various processing stages (e.g. feature extraction, template selection, and matching). Incorporating quality information at each stage of the biometric system can help to achieve significantly improved performance. This thesis is focused on using face biometric quality measures to develop quality-based adaptive techniques. The main aim of these techniques is to boost the performance of the biometric system by incorporating biometric data quality awareness procedures and adapting verification/identification accordingly. Our investigations into adaptive quality-based face recognition consist of five closely related and complementary components:

  1. We first study state-of-the-art quality assessment of face images and describe a unified taxonomy of face image quality measures, highlighting their impact on matching accuracy. Quality-based adaptive normalisation techniques are developed and used to select the best way to restore reasonable quality. Adaptive enhancement is shown to outperform the corresponding non-adaptive blind enhancement because the former avoids unnecessary enhancement that can introduce noise and artefacts.
  2. We investigate the influence of biometric sample quality on the Relative Entropy (RE) present in biometric data captured under different quality conditions. The RE value of a user's biometric features is the amount of information that distinguishes the user from a given population. We observe that severe degradation in image quality may result in a drop of more than 75% in the RE values of face images. We also establish that different feature extraction techniques (e.g. PCA and different wavelet subbands) yield different RE values, and demonstrate that for each of the feature extraction techniques there is a strong positive correlation between RE and the accuracy of the biometric system. These investigations also reveal individual differences in RE values which can be exploited to customise and improve face recognition.
  3. We propose an adaptive incremental fusion scheme to determine the optimal ratio (i.e. optimal subset of features) of partial face images for each quality condition. We demonstrate that such a scheme is also useful for full face images, enhancing authentication accuracy significantly. One of the important conclusions of this investigation is that the percentage of the partial/whole facial image required to achieve optimal face recognition performance varies from 3% to 80% of the face image according to two criteria:
    • face image quality and
    • the available part of the face image.

    Interestingly, even for low face image quality, authentication accuracy can be improved significantly. Nevertheless, this scheme shows that biometric features should be evaluated and selected adaptively, based on the quality of the biometric data.

  4. We investigate a quality-based clustering approach to template selection that adaptively selects an optimal number of templates for each individual. The number of biometric templates depends mainly on the performance of each individual (i.e. the gallery size should be optimised to meet the needs of the target individual). The benefits of adaptive biometric template selection techniques include:
    • significant storage reduction
    • provision for noise tolerance and
    • a trade-off between the required biometric system performance (i.e. accuracy) and the available storage resources.
  5. We propose a method to adjust the decision threshold of a face recognition system adaptively based on the quality of the input face images. Unfortunately, the performance of face recognition schemes under different quality conditions, as reported in the literature, is evaluated using non-adaptive thresholds that are not practical in real-life applications. In fact, non-adaptive thresholding could become a source of attack that interferes with verification through manipulation of the recording conditions.

We conclude that our investigations provide strong evidence for the use of quality-based adaptive face recognition schemes for improved performance and pave the way for the development of environment-aware recognition systems.


Ahmad Basheer Hassanat
Visual Words for Automatic Lip-Reading
Area of Study: Biometrics, Image Processing
Award: DPhil Computing, 2009
Supervisor: Professor Sabah Jassim

Lip reading is used to understand or interpret speech without hearing it, a technique especially mastered by people with hearing difficulties. The ability to lip read enables a person with a hearing impairment to communicate with others and to engage in social activities, which otherwise would be difficult. Recent advances in the fields of computer vision, pattern recognition, and signal processing have led to a growing interest in automating this challenging task of lip reading. Indeed, automating the human ability to lip read, a process referred to as visual speech recognition, could open the door for other novel applications. This thesis investigates various issues faced by an automated lip-reading system and proposes a novel “visual words” based approach to automatic lip reading. The proposed approach includes a novel automatic face localisation scheme and a lip localisation method.

The traditional approaches to automatic lip reading are based on visemes (mouth shapes or appearances, or sequences of mouth dynamics, required to generate a phoneme in the visual domain). However, several problems arise when using visemes in visual speech recognition systems: the number of visemes is low (between 10 and 14) compared to phonemes (between 45 and 53), visemes cover only a small subspace of the mouth motions represented in the visual domain, and there are many other issues. These problems contribute to the poor performance of the traditional approaches; the visemic approach is akin to digitising the signal of the spoken word, and digitising causes a loss of information. In contrast, the proposed “visual words” approach considers the signature of the whole word rather than only parts of it, and can provide a good alternative to the visemic approaches to automatic lip reading.

The proposed approach consists of three major stages: detecting/localising human faces, lip localisation, and lip reading. For the first stage, we propose a face localisation method which is a hybrid of a knowledge-based approach, a template-matching approach and a feature-invariant approach (skin colour). This method was tested on the PDA database (a video database, recorded using a personal digital assistant camera, containing thousands of video clips of 60 subjects uttering 18 different categories of speech in 4 different indoor/outdoor lighting conditions). The results were compared against a benchmark face detection scheme, and they indicate that the proposed approach to localising faces outperforms the benchmark scheme. The proposed method is robust against varying lighting conditions and complex backgrounds.

For the second stage, we propose two colour-based lip detection methods, which are evaluated on a newly acquired video database and compared against a number of state-of-the-art approaches that include model-based and image-based methods. The results demonstrate that the proposed (nearest-colour) approach performs significantly better than the existing methods.

The proposed visual words approach uses a signature (a 2-dimensional feature matrix) that represents an entire spoken word. The signature of a spoken word is an aggregation of 8 features. These include appearance-based features, temporal information and geometric-based features extracted from the sequence of frames that correspond to the spoken word.

During the word recognition stage, the signatures of two words are compared by first calculating the similarity of each feature of the two signatures, which produces 8 similarity scores, one for each feature. A match score for the two words is calculated by taking a weighted average (i.e. score-level fusion) of these scores, which is then passed on to a KNN classifier. Differences in the duration of a spoken word are dealt with by using dynamic time warping and/or linear interpolation. A weighted KNN classifier is proposed to enhance the word recognition rate.
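
A minimal sketch of this score-level fusion step follows; the per-feature similarity scores, equal weights and gallery labels below are assumptions, and the actual 8 signature features, learned weights and DTW alignment are as defined in the thesis.

```python
import numpy as np

def fuse_scores(similarities: np.ndarray, weights: np.ndarray) -> float:
    """Weighted average (score-level fusion) of per-feature similarity scores."""
    return float(np.dot(similarities, weights) / weights.sum())

def weighted_knn(match_scores, labels, k=3):
    """Classify a probe word from its fused match scores against gallery words,
    weighting each of the k nearest neighbours by its score."""
    order = np.argsort(match_scores)[::-1][:k]          # highest similarity first
    votes = {}
    for idx in order:
        votes[labels[idx]] = votes.get(labels[idx], 0.0) + match_scores[idx]
    return max(votes, key=votes.get)

# Probe word vs. 4 gallery words: 8 similarity scores each, one per signature feature.
rng = np.random.default_rng(0)
per_feature = rng.random((4, 8))
weights = np.ones(8)                                    # assumed equal weights
fused = np.array([fuse_scores(s, weights) for s in per_feature])
print(weighted_knn(fused, labels=["yes", "no", "yes", "stop"], k=3))
```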

The proposed visual words recognition system was evaluated using a large video database of people from varying backgrounds, including native and non-native English speakers, over large experiment sets covering different scenarios. The evaluation demonstrated the superiority of the “visual words” approach over the traditional visemic approach commonly used for this kind of problem. These experiments also produced several findings: lip reading is a speaker-dependent problem; some people produce relatively weak visual signals while speaking (termed visual-speechless persons); and the performance of a lip-reading system can be enhanced by using a language model.

The proposed approach for visual speech recognition was applied to speaker recognition tasks, where it could be used for “visual passwords”. It was also applied to a lip-reading surveillance application. Initial experiments indicate promising results, laying a strong foundation for future work.


Maysson Al-haj Ibrahim
Topology-based pathway enrichment and biomarker identification
Area of Study: Bioinformatics
Award: DPhil Bioinformatics, 2013
Supervisors: Dr Kenneth Langlands, Professor Sabah Jassim

The ability to profile genome-wide transcriptional changes with cDNA microarray technology has augured a revolution in biomedical science. Microarrays provide researchers with a tool to quantify temporal and spatial changes in gene activity in a cellular system in response to exogenous stimulation, or to identify changes in gene expression characteristic of a pathological state. This leads to improved disease diagnosis, prognostication and the development of more effective pharmacological intervention strategies. However, meaningful analysis of complex data generated by microarrays remains a daunting task despite extensive effort to develop sophisticated analysis methods.

The research presented in the thesis concerns two different, but closely related, problems in microarray data analysis. The first is concerned with improving pathway enrichment analysis by exploiting knowledge of biological relevance, while the second is concerned with incorporating pathway analysis and other biological knowledge to improve the accuracy and the robustness of biomarker discovery.

This thesis argues that most typical pathway analysis methods tend to ignore rich gene expression information once a differentially-expressed gene set is identified. Moreover, existing methods tend to give no consideration to relationship information between transcripts. It is logical that the effective use of such information might inform and improve the identification of critical biological processes if exploited properly. Moreover, traditional methods that apply classical feature selection methods to data to find those genes most discriminative of pathological states (i.e. biomarkers) also ignore relationship information. I argue that using functional enrichment to inform a biomarker selection algorithm will lead to better performance compared to statistical filtering or correlation methods. This is of enormous relevance if molecular investigations are to inform the management of disease.

The research herein makes two key contributions to the field by developing new rational methods that consider the limitations of existing methods in order to yield biologically-relevant results. In the first part, I propose a new pathway enrichment score called the Pathway Regulation Score (PRS), in which both pathway topology and the magnitude of gene expression changes are considered. I argue that the PRS method provides a powerful initial filter in the enrichment of biologically-relevant information. To test the relevance and the reliability of my scoring system, I present a number of experiments conducted on publicly-available datasets representative of different pathological states. The experimental results showed that the inclusion of expression and topological data in the assessment of pathway perturbation facilitated the discrimination of key processes. Moreover, I developed an open-source, publicly available software package to implement my proposed PRS method.
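
The published PRS formula is not reproduced here; the toy sketch below only illustrates the general idea of weighting each differentially-expressed gene's fold change by a topology-derived weight (here, an assumed count of downstream genes in the pathway graph) and summing over the pathway. The gene names and numbers are hypothetical.

```python
import numpy as np

def pathway_regulation_score(fold_changes: dict, downstream_counts: dict) -> float:
    """Toy topology-weighted pathway score: each differentially-expressed gene's
    |log2 fold change| is weighted by how many pathway genes lie downstream of it."""
    score = 0.0
    for gene, fc in fold_changes.items():
        weight = 1 + downstream_counts.get(gene, 0)   # hub genes count for more
        score += weight * abs(fc)
    return score

# Hypothetical pathway: TP53 regulates many downstream genes, GADD45A few.
fold_changes = {"TP53": 2.1, "MDM2": -1.3, "GADD45A": 0.8}
downstream = {"TP53": 12, "MDM2": 3, "GADD45A": 1}
print(round(pathway_regulation_score(fold_changes, downstream), 2))
```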

The second part of my work focused on the identification of “genetic signatures” characteristic of disease subtypes by the analysis of high-throughput transcriptional profiling data. I describe novel biomarker discovery methods that take the biological relevance of genes into account. This was achieved by integrating gene expression data with prior biological knowledge (i.e. by enriching biological pathways, from which subsets of genes were selected) to reveal a group of strongly-correlated genes that provide accurate discrimination of complex as well as simple disease subtypes. Furthermore, I investigated the use of functional enrichment at two different stages to improve the accuracy and reliability of biomarker discovery. Finally, I supported my theoretical arguments with a number of experiments comparing biomarker identification methods on a range of clinically-relevant datasets. These experiments confirmed that I was able to identify genetic signatures of greater prognostic relevance than those currently in the clinic.


Localisation, Software Design, Authentication, Wireless Communication

Ali Albu-Rghaif
Modelling and integrating GNSS (global navigation satellite system) signals for real-world simulation
Area of Study: Localisation
Award: DPhil Computing, 2015
Supervisor: Dr Ihsan Lami

The aim of my project is to devise a novel algorithm that combines GNSS (GPS, Galileo and GLONASS) signals and processes them so as to achieve better localisation in difficult signal reception conditions. Better localisation means a more accurate Smartphone position, a faster time to fix, and/or a reduced amount of computation/thrashing required to acquire and track the available GNSS signals. The algorithm can be implemented by any Smartphone localisation solution designer at the RF front-end and in a software receiver; that is, it would help the designer to build a more reliable and robust multi-GNSS receiver and enhance the user’s localisation experience. A further aim is to model/simulate these signals in various real-world scenarios.

Two algorithms have been developed: an early detection of available GNSS signals at the front-end of any such GNSS receiver solution, and a dynamic GPS signal acquisition based on compressive sensing (CS) for indoor and outdoor environments.

Currently, we are developing a new GNSS signal acquisition scheme using CS, in which the dictionary matrix (a bank of correlators) is designed to have a much smaller dimension than the dictionaries used in current CS-GPS methods.
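
As a generic sketch of compressive-sensing acquisition (not the specific correlator-bank dictionary designed in this work): the received samples are modelled as y = A x with a sparse x, and a greedy solver such as orthogonal matching pursuit recovers the few active dictionary atoms (e.g. code-phase/Doppler bins). The matrix and sparsity level below are assumptions.

```python
import numpy as np

def omp(A: np.ndarray, y: np.ndarray, sparsity: int) -> np.ndarray:
    """Orthogonal matching pursuit: recover a sparse x with y ~= A @ x."""
    residual, support = y.copy(), []
    x = np.zeros(A.shape[1])
    for _ in range(sparsity):
        j = int(np.argmax(np.abs(A.T @ residual)))   # best-matching atom
        support.append(j)
        coef, *_ = np.linalg.lstsq(A[:, support], y, rcond=None)
        residual = y - A[:, support] @ coef
    x[support] = coef
    return x

rng = np.random.default_rng(0)
m, n = 64, 256                       # m measurements, n dictionary atoms
A = rng.standard_normal((m, n)) / np.sqrt(m)
true_x = np.zeros(n); true_x[[17, 130]] = [1.0, 0.7]   # two "visible satellites"
y = A @ true_x
print(np.nonzero(omp(A, y, sparsity=2))[0])   # indices of the recovered atoms
```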

Read thesis


Maher Al-Aboodi
Using Partial Differential Equations (PDE’s) to devise a method and process timeshare received signals from various wireless technologies
Area of Study: Localisation, Wireless Communication
Award: DPhil Computing, 2016
Supervisor: Dr Ihsan Lami

My research focuses on using the same receiver chain (RF and digital) to handle a number of wireless signals. The main aim of the investigation is to achieve efficiency through the much-valued saving in design and cost over implementing these technologies side-by-side on the same receiver chip. This project has many interesting and innovative implications. For example, I will investigate whether the signals from both Bluetooth and GPS can be time-sliced so that the same receiver RF and baseband chain can capture and process both signals. This would reduce the cost and effort of implementing location-based services, such as navigation, on a mobile phone. The time-slicing procedure relies on solving PDEs that model the integration of the various functionalities.

Read thesis


Halgurd S Maghdid
Hybridization of GPS and Wireless Technologies to Offer Seamless Outdoor / Indoor Positioning for LBS Applications
Area of Study: Localisation, Wireless Communication
Award: DPhil Computing, 2016
Supervisor: Dr Ihsan Lami

Smartphone manufacturers are incorporating several wireless technologies including GPS, cellular, Wi-Fi, and Bluetooth, and other GNSS technologies have been rolled out such as GLONASS (Russia), Galileo (Europe), IRNSS (India), QZSS (Japan), and Compass (China). Despite all of these wireless positioning technologies, seamless outdoor/indoor Smartphone positioning has still not been achieved. My research focuses on hybridising and/or combining these solutions and technologies into a solution based on the localisation techniques available on Smartphones. Currently, the research studies hybridising GPS with Wi-Fi technology to offer continuous Smartphone positioning from outdoors to any indoor environment. My proposal does not require the dedicated hardware (host server, sensors, and calibration) typically associated with indoor solutions. Therefore, our scheme shall reduce the required memory and traffic on Smartphones, thus saving battery consumption, connection/interaction traffic and processing time.

Read thesis


Torben Kuseler
Localisation and obfuscation techniques for enhanced multi-factor authentication in mCommerce applications
Area of Study: Localisation, Software Design, Authentication
Award: DPhil Computing, 2012
Supervisor: Dr Ihsan Lami

The focus of this thesis is to investigate solutions that enhance the security of remote client authentication for mCommerce applications on devices such as Smartphones or Tablet-PCs. This thesis details three innovative authentication schemes developed during the course of this study. These schemes are based on the use of localisation and obfuscation techniques in combination with multi-factor authentication to enforce the knowledge of “who, when, where and how” necessary for any remote client authentication attempt, thus assuring the mCommerce service provider of the genuine client as well as ensuring correct capturing and processing of the client’s authentication data on the remote phone. The author of this thesis believes that these schemes, when deployed in commercial mCommerce applications, shall enhance the service provider’s trust in the received client data and therefore encourage more service providers to offer their mCommerce services to their clients via phone applications.

The first proposed scheme, called MORE-BAILS, combines multiple authentication factors into a One-Time Multi-Factor Biometric Representation (OTMFBR) of a client, so as to achieve robust, secure, and privacy-preserving client authentication. Tests and trials of this scheme proved that it is viable for use in the authentication process of any type of mCommerce phone application.

The second and third schemes, called oBiometrics and LocAuth respectively, use a new obfuscated-interpretation approach to protect the mCommerce application against misuse by attackers as well as to ensure the real-time and one-time properties of the client’s authentication attempt. The novelty of combining biometric-based keys with obfuscated interpretation tightly binds correct mCommerce application execution to the genuine client. Furthermore, integration of the client’s current location and the real time into the LocAuth challenge / response scheme eliminates the risk that an attacker can illegitimately re-use previously gathered genuine client authentication data in a replay attack. Based on appropriate criteria, the MORE-BAILS, oBiometrics and LocAuth levels of security, user-friendliness and ease of implementation are proven in experiments and trials on state-of-the-art Android-based Smartphones.
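
The following is only a minimal sketch of the kind of location- and time-bound challenge/response idea described for LocAuth. The actual scheme also relies on obfuscated interpretation and biometric-based keys, so this HMAC-based toy, with an assumed coarse location grid and a one-minute time window, is illustrative rather than the thesis protocol.

```python
import hmac, hashlib, os, struct, time

def auth_response(key: bytes, challenge: bytes, lat: float, lon: float,
                  epoch_minute: int) -> bytes:
    """Bind the response to the challenge, the client's coarse location and the
    current minute, so a captured response cannot be replayed elsewhere or later."""
    coarse = struct.pack(">ii", int(lat * 1000), int(lon * 1000))  # ~100 m grid (assumed)
    msg = challenge + coarse + struct.pack(">q", epoch_minute)
    return hmac.new(key, msg, hashlib.sha256).digest()

key = os.urandom(32)                       # e.g. a key derived from biometric data
challenge = os.urandom(16)                 # sent by the service provider
minute = int(time.time() // 60)
resp = auth_response(key, challenge, 52.0007, -0.9876, minute)

# The server, knowing the registered key and the claimed location/time, verifies:
ok = hmac.compare_digest(resp, auth_response(key, challenge, 52.0007, -0.9876, minute))
print(ok)
```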

Read abstract


Compressive Sensing

Nadia Al-Hassan
Super Resolution and Compressive Sensing for Face Recognition
Area of Study: Biometrics, Compressive Sensing
Award: DPhil Computing, 2015
Supervisors: Professor Sabah Jassim, Dr Harin Sellahewa

My research mainly focuses on using Super Resolution (SR) techniques to overcome the resolution limitation and construct high-resolution images for face recognition in uncontrolled conditions, where low-resolution face images have a significant impact on the performance of face recognition systems and recognition accuracy drops dramatically unless the resolution of the captured images is enhanced. Our aim is to exploit the recently developed compressive sensing (CS) theory to develop scalable face recognition schemes that do not require training, by designing new deterministic dictionaries that are independent of image sets and also satisfy the CS properties. These dictionaries can be used as an alternative to existing super resolution dictionaries.

Read thesis


Aras T Asaad
Successful and Failing Matrices for L1-Recovery of Sparse Vectors
Area of Study: Compressive Sensing
Award: MSc Computer Science, 2012
Supervisor: Dr Stuart Hall

In this thesis we give an overview of the notion of compressed sensing together with some special types of compressed sensing matrices. We then investigate the Restricted Isometry Property and the Null Space Property, which are two of the most well-known properties of compressed sensing matrices needed for sparse signal recovery. We show that when the Restricted Isometry constant is ‘small enough’ we can recover sparse vectors by L1-minimization, whereas if the Restricted Isometry constant is ‘large’, L1-minimization fails to recover all sparse vectors.
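
A small sketch of L1-recovery (basis pursuit) written as a linear programme, using the standard split x = u - v with u, v >= 0. The sensing matrix here is a generic Gaussian matrix, which satisfies the Restricted Isometry Property with high probability, rather than any specific construction from the thesis.

```python
import numpy as np
from scipy.optimize import linprog

def l1_recover(A: np.ndarray, b: np.ndarray) -> np.ndarray:
    """Solve min ||x||_1 subject to A x = b via the LP: min 1'(u+v), A(u-v)=b, u,v>=0."""
    m, n = A.shape
    c = np.ones(2 * n)
    A_eq = np.hstack([A, -A])
    res = linprog(c, A_eq=A_eq, b_eq=b, bounds=[(0, None)] * (2 * n), method="highs")
    u, v = res.x[:n], res.x[n:]
    return u - v

rng = np.random.default_rng(0)
m, n, k = 30, 100, 3                      # 30 measurements, 100-dim signal, 3-sparse
A = rng.standard_normal((m, n)) / np.sqrt(m)
x_true = np.zeros(n); x_true[rng.choice(n, k, replace=False)] = rng.standard_normal(k)
x_hat = l1_recover(A, A @ x_true)
print(np.allclose(x_hat, x_true, atol=1e-4))   # exact recovery of the sparse vector
```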


Cloud Computing

Nahla Fadel Alwan
The Readiness of Mobile Operating Systems for Handling Cloud Computing Services
Area of Study: Cloud Computing
Award: MSc Computing, 2011
Supervisor: Dr Ihsan Lami

This thesis provides a detailed introduction to Cloud Computing as well as a brief comparison between four of the mobile OSs focusing on the Android OS.

The key to this thesis is the set of Cloud Services criteria which is used to evaluate the readiness of a mobile OS (represented by the Android OS) to handle Cloud Services.

In order to achieve this, I will start by giving an explanation of Cloud Services and their criteria. I will then focus on REST and SOAP, which are the main cloud protocols, compare the two, and select REST for use in a practical demonstration. I will then describe the Android OS in detail.

In the next part I will present the related libraries, found in the Android OS, that support the general criteria of Cloud Services on the client side.

The practical part will contain an Android application demonstrating this mobile OS’s support for the Cloud Services criteria on the client side.

Finally, I will conclude by assessing the effectiveness of Cloud Services as a way of meeting mobile users’ needs.


Wireless Communication

Ali Al-Sherbaz
Wimax-Wifi Techniques for Baseband Convergence and Routing Protocols
Area of Study: Wireless Communication
Award: DPhil Computing, 2010
Supervisors: Professor Chris Adams, Dr Ihsan Lami

The focus of this study was to investigate solutions that, when implemented in any heterogeneous wireless network, shall enhance the existing standard and routing protocol connectivity without impacting the standard or changing the wireless transceiver’s functions, thus achieving efficient interoperability at much reduced overheads. The techniques proposed in this research are centred on the lower layers. This is because the WiMax and WiFi standards have not addressed the backward compatibility of the two technologies at the MAC and PHY layers, for both the baseband functions and the routing IP addresses. This thesis describes two innovative techniques submitted for a PhD degree.

The first technique is to combine WiMax and WiFi signals so as to utilise the same “baseband implementation chain” to handle both of these technologies, thus ensuring ubiquitous data communication. A WiMax-WiFi Baseband Convergence (W2BC) implementation is proposed to offer an optimum configurable solution targeted at combining the 802.16d WiMax and the 802.11a,n WiFi technologies. This approach provides fertile ground for future work on combining more OFDM-based wireless technologies. Based on analysis and simulation, the W2BC can achieve savings in device cost, size, power consumption and implementation complexity when compared to current side-by-side implementations of these two technologies.

The second technique, called “Prime-IP”, can be implemented with, and enhance, any routing protocol. During the route discovery process, Prime-IP enables any node on a wireless mesh network (WMN) to dynamically select the best available route on the network. Prime-IP proposes a novel recursive process, based on prime-number addressing, to accumulate knowledge of nodes beyond the “neighbouring nodes”, and to determine the sequence of all the “intermediate nodes” used to form the route.
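
The thesis describes Prime-IP only at a high level here; the toy sketch below merely illustrates one way prime-number addressing can encode accumulated route knowledge (the product of the primes assigned to visited nodes), so that the set of traversed nodes can be recovered by factorisation. It is an assumption-laden illustration (it does not capture ordering or the recursive discovery process), not the actual protocol.

```python
from sympy import prime, factorint

# Assign each mesh node a unique prime "address".
node_primes = {name: prime(i + 1) for i, name in enumerate(["A", "B", "C", "D", "E"])}

def accumulate_route(nodes):
    """Encode the set of nodes traversed so far as a single product of their primes."""
    acc = 1
    for n in nodes:
        acc *= node_primes[n]
    return acc

def decode_route(acc):
    """Recover which nodes a route-knowledge value has passed through."""
    primes_in_acc = set(factorint(acc))
    return sorted(n for n, p in node_primes.items() if p in primes_in_acc)

route_value = accumulate_route(["A", "C", "D"])   # route discovery passes A -> C -> D
print(route_value, decode_route(route_value))
```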


Image Processing

Hanan Al-Jubouri
Fusion Methods for Content-Based Image Retrieval
Area of Study: Image Processing
Award: DPhil Computing, 2015
Supervisors: Professor Sabah Jassim, Mr Hongbo Du

My research area is Content-Based Image Retrieval (CBIR). Nowadays, advances in multimedia technology have led to a huge number of digital images being captured and stored on computers, and digital image processing has become an essential part of many scientific fields such as medicine, biology, astronomy, forensics, and computer vision. As a result, there is a growing demand for effective retrieval of images based on their visual content when textual annotation of images is either unavailable or difficult to obtain.

CBIR is about searching for images in a database that have similar visual content to a query image. The main challenge is the so-called semantic gap between high-level concepts such as image category and low-level features such as colour, texture, and shapes (i.e. visual content), which results in irrelevant images being retrieved due to many factors (e.g. shortcomings of features and similarity measures). Therefore, my research focuses on investigating different local features, clustering algorithms, and fusion methods (e.g. score-level fusion, data-level fusion, and clustering ensembles). We hope to develop an algorithm to tackle the challenge of the semantic gap.

Read thesis


Naseer Al-Jawad
Exploiting Statistical Properties of Wavelet Coefficients for Image / Video Processing and Analysis Tasks
Area of Study: Image Processing
Award: DPhil Computing, 2009
Supervisor: Professor Sabah Jassim

In this thesis the statistical properties of wavelet transform high-frequency subbands are used and exploited in three main applications: image/video feature-preserving compression, face-biometric content-based video retrieval, and face feature extraction for face verification and recognition. The main idea of this thesis was also used previously in watermarking (Dietze 2005), where the watermark can be hidden automatically near the significant features in the wavelet sub-bands. The idea has also been used in image compression, where special integer compression is applied on constrained low-end devices (Ehlers 2008). In image quality measurement, the Laplace Distribution Histogram (LDH) has also been used to measure image quality: the theoretical LDH of any high-frequency wavelet sub-band can match the histogram produced from the same high-frequency wavelet sub-band of a high-quality picture, whereas a noisy or blurred one has an LDH that can be fitted to the theoretical one (Wang and Simoncelli 2005).

Some research has used the idea of wavelet high-frequency sub-band feature extraction implicitly; in this thesis we focus explicitly on using the statistical properties of the sub-bands of the multi-resolution wavelet transform. The fact that the coefficients of each high-frequency wavelet sub-band follow a Laplace Distribution (LD) (also called a Generalised Gaussian distribution) has been noted in the literature. Here the relation between the statistical properties of the wavelet high-frequency sub-bands and feature extraction is well established. The LDH has two tails, which makes its shape either symmetrical or skewed to the left or the right; this symmetry or skewing is normally around the mean, which is theoretically equal to zero. In our study we paid close attention to these tails, as they represent the significant image features, which can be mapped from the wavelet domain to the spatial domain. The features can be maintained, accessed, and modified very easily using a certain threshold. Feature extraction is automated by correlating the threshold with the standard deviation (STD) of each high-frequency sub-band, and automatic extraction is highly preferable for most applications. This automatic feature location/extraction is used in image feature-preserving compression, where larger compression symbols are given to the significant coefficients and relatively small compression symbols to the non-significant coefficients.
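
A minimal sketch of this STD-threshold idea, using PyWavelets for a one-level Haar decomposition; the thesis works across multiple levels and applications, and the threshold multiplier below is an arbitrary assumption.

```python
import numpy as np
import pywt

def significant_features(image: np.ndarray, k: float = 2.0):
    """Mark wavelet detail coefficients whose magnitude exceeds k * STD of their sub-band;
    these tail coefficients correspond to significant image features."""
    _, (cH, cV, cD) = pywt.dwt2(image.astype(float), "haar")
    masks = {}
    for name, band in (("horizontal", cH), ("vertical", cV), ("diagonal", cD)):
        masks[name] = np.abs(band) > k * band.std()
    return masks

rng = np.random.default_rng(0)
img = rng.normal(128, 20, (64, 64))
img[20:40, 20:40] += 60          # a strong feature region
for name, mask in significant_features(img).items():
    print(name, int(mask.sum()), "significant coefficients")
```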

For video sequences, the STDs of the corresponding wavelet high-frequency sub-bands were used to measure the differences between images; the fact that adjacent frames of a video sequence vary only slightly is exploited to reduce the number of Huffman trees used. The coding symbols produced for the first frame can be reused to compress the next frame. The idea of using a theoretical LDH generated from the statistical properties of the first frame was also discussed and tested in the proposed video compression.

In face-biometric content-based video retrieval, another idea, based on combining the average of the level-k wavelet sub-bands with the corresponding STDs of the high-frequency sub-bands at all levels across the video, was used as a signature for the entire video. This signature is used to retrieve videos that belong to the same person, recorded at different times. The BANCA database was used to test the idea.

Different distance functions have been tested and different weights have been applied when combining the average wavelet sub-bands with the STDs. The average retrieval score was 5.25 out of 7 possible videos retrieved from the 416 videos in the database.

In face feature extraction, the STD is again used as a threshold to extract facial features from the wavelet high-frequency sub-bands. The extracted features are treated differently: the vertical features are used to fine-tune the left and right edges of the face, while the horizontal features are used to create two main profiles, the first horizontal across both eyes and the second vertical across the eyes, nose, and mouth. The Dynamic Time Warping method was applied as a distance function to measure the similarity of these profiles to a given template. A success rate of 89% was achieved using the face feature ratio.


Areej Polina
Wavelet Based Approaches to Content-based Video Indexing and Retrieval for Multimedia Applications
Area of Study: Image Processing
Award: MSc Computer Science, 2009
Supervisors: Professor Sabah Jassim, Dr Harin Sellahewa

Content-Based Video Retrieval (CBVR) is a technology that aims to organise large digital video archives using their visual content. CBVR is a common research challenge in fields such as computer vision, machine learning, information retrieval, human-computer interaction, database systems, the web and data mining. The rapid advances in the World Wide Web have opened the way to remotely access information stored in a variety of digital libraries dispersed over the vastness of the Internet. Users’ queries often require the manipulation of huge amounts of image data using somewhat limited bandwidth. Efficiency requirements add to the need for extremely fast content-based image indexing systems that work well within time and bandwidth constraints. The difficult part of the problem is to construct a feature vector that both represents the image/video content and is efficient to search. Due to the vast amount of video content, most existing CBVR systems still target image indexing.

Our research aims to investigate wavelet-based CBVR systems for videos in certain categories of applications. Using three wavelet filters, i.e. Haar, Daubechies_4 and Daubechies_8, which have been used in the Department of Applied Computing for indexing face-biometric videos, we shall test their viability for indexing certain categories of videos. The system has been implemented using these three filters on the PDA-CBVR database to depth four. Initial results from pilot testing indicate marked success, but further modification and experiments are needed.

The image/video indexing system under development is based on using the low and/or high frequency subbands of wavelet-transformed images or video frames at a certain level of decomposition, as well as statistical parameters of the other high frequency subbands. The system also uses colour variation over spatial extent for videos in a manner that provides meaningful video comparison. The wavelet coefficients in the lowest frequency band and their statistical parameters are stored as a feature vector, which is much smaller in size than the original image. For matching or measuring similarity we use a number of known distance / similarity functions such as the Euclidean distance function, a quality index and a statistical distance/score function. The nearest-neighbour criterion is used for classification and retrieval.
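
A rough sketch of the kind of wavelet signature described above: the low-frequency subband at a chosen decomposition level plus the standard deviations of the high-frequency subbands, compared with a Euclidean distance. The decomposition level, filter and archive below are assumptions.

```python
import numpy as np
import pywt

def wavelet_signature(frame: np.ndarray, level: int = 3, wavelet: str = "haar") -> np.ndarray:
    """Feature vector = flattened low-frequency subband + STD of each detail subband."""
    coeffs = pywt.wavedec2(frame.astype(float), wavelet, level=level)
    low = coeffs[0].ravel()
    detail_stds = [band.std() for det in coeffs[1:] for band in det]
    return np.concatenate([low, detail_stds])

def distance(sig_a: np.ndarray, sig_b: np.ndarray) -> float:
    return float(np.linalg.norm(sig_a - sig_b))

rng = np.random.default_rng(0)
query = rng.random((64, 64))
archive = [rng.random((64, 64)) for _ in range(5)] + [query + 0.01 * rng.random((64, 64))]
sig_q = wavelet_signature(query)
best = min(range(len(archive)), key=lambda i: distance(sig_q, wavelet_signature(archive[i])))
print("closest archive frame:", best)   # expected: the near-duplicate (index 5)
```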


Johan Hendrik Ehlers
Definition Driven Image Processing For Constrained Environments
Area of Study: Image Processing
Award: DPhil Computing, 2008
Supervisor: Professor Sabah Jassim

Pervasive computing devices are increasingly deployed to complement and support communication and information systems, propelled by low-cost mobile devices that have modest communication, computational and sensing capabilities. Such devices are perfect for personal, anytime-anywhere computing, or for speciality tasks that include unsafe or remote work. Fitted with optical and audio sensors, these devices are deployed in a variety of exciting applications including entertainment, communication, high-speed photography, health care, remote telemedicine, law-enforcement activities, security surveillance, disaster rescue management, financial transactions, reconnaissance and biometrics-based authentication. Efficient video processing forms the backbone for many of these applications.

This thesis is concerned with the development of efficient real-time video processing techniques to enable and motivate these and new applications. We propose efficient and adaptable video processing techniques for constrained, low-end, pervasive computing devices. Investigations are made towards the following common requirements: capturing high-definition data, processing the data for information retrieval, and finally fast buffer or data compression. In this thesis, we have primarily investigated and developed video-related techniques that require minimal use of memory and are most suitable for implementation on memory-constrained devices such as mobile phones. We have developed and implemented procedures that led to a significant increase in audio and video data capture rates, crucial for a multi-modal biometric authentication system. The developed system maximises the use of on-board memory, thereby enabling the processing of higher-definition video and audio data.

Wavelet transforms are currently the most fitting signal analysis tool for real-time environments. Various libraries have been developed that support a variety of wavelet filters used in research applications or as prototypes for standards such as the JPEG2000 recommendation. However, classical implementations of wavelets involve a significant amount of memory access and copying, which increases with the length of the underlying filters. We have investigated and developed a new wavelet library based on the alternative implementation of the lifting scheme, suitable for low-end devices. We demonstrate that the lifting scheme can be used to adapt filters in order to have the highly desirable feature of precision preservation, for both faster transformations and lower memory usage. The precision preservation property has not received much attention but is crucial on devices with extremely low memory and processing ability. We shall present the results of various experiments to demonstrate the suitability and efficiency of implementing various wavelet-based tasks, incorporated in this library, on a commercially available PDA of modest capabilities.
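
To illustrate the lifting idea and its precision-preserving (integer-to-integer, perfectly invertible) property on a 1-D signal, here is a minimal Haar lifting step; the library described in the thesis covers many filters and the 2-D case, so this sketch is only the simplest instance.

```python
import numpy as np

def haar_lift_forward(x: np.ndarray):
    """Integer-to-integer Haar (S-transform) via lifting: predict then update."""
    a, b = x[0::2].copy(), x[1::2].copy()
    d = a - b                     # predict step: detail coefficients
    s = b + (d >> 1)              # update step: approximation coefficients
    return s, d

def haar_lift_inverse(s: np.ndarray, d: np.ndarray) -> np.ndarray:
    """Undo the lifting steps in reverse order; reconstruction is exact."""
    b = s - (d >> 1)
    a = b + d
    out = np.empty(s.size + d.size, dtype=s.dtype)
    out[0::2], out[1::2] = a, b
    return out

x = np.array([12, 10, 7, 9, 200, 198, 3, 0], dtype=np.int32)
s, d = haar_lift_forward(x)
assert np.array_equal(haar_lift_inverse(s, d), x)   # lossless (precision preserved)
print(s, d)
```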

We shall finally develop wavelet transforms based on the use of modular arithmetic over the commutative ring Z256, which can be applied without the need for any added memory. Lossless compression that uses such a wavelet is very efficient, but at a somewhat less than optimal compression ratio. However, we shall demonstrate that incorporating this efficient lossless compression within the recording process can be used to expand the capabilities of constrained devices by using the memory saved through this compression for other processing tasks, including longer recording sessions.


Biometrics, Image Processing, Security / Privacy

Harin Sellahewa
Wavelet-Based Automatic Face Recognition for Constrained Devices
Area of Study: Biometrics, Image Processing, Security / Privacy
Award: DPhil Computing, 2006
Supervisor: Professor Sabah Jassim

Rapid improvements in computing technology over the past two decades have now made it possible to perform real-time, biometric-based automatic human recognition. The increase in criminal activities based on identity theft and international terrorism has been a major driving force to improve the accuracy as well as the efficiency of biometric-based recognition systems. Due to the unobtrusive nature of capturing facial images, the human face remains the most natural and suitable biometric feature to be used for automatic recognition in many applications. This thesis is concerned with Automatic Face Recognition (AFR), which is a challenging task due to many varying conditions. Here we present an efficient and robust approach to automatic face recognition based on wavelet transforms that can be implemented on computationally constrained devices such as smart cards, 3rd Generation (3G) mobile phones and Personal Digital Assistants (PDAs).

In existing approaches to face recognition, the entire high-dimensional face image is statistically analysed to obtain a low-dimensional feature vector that best describes the given face image. Face images are first linearly transformed into a low-dimensional subspace and are then represented as compact feature vectors in this new subspace. Typical dimension reduction techniques are based on Principal Component Analysis (PCA), Linear Discriminant Analysis (LDA) and Independent Component Analysis (ICA). These methods require a relatively large number of training images to create a good subspace. Although the face images are represented by a set of low-dimensional feature vectors, the subspace data (e.g. the basis vectors of the subspace) is also required during the recognition process. This leads to an increase in both storage and computation during recognition. Such requirements cannot be met by existing constrained devices for real-time recognition, especially when these devices support the functions of other applications. Due to these constraints, the implementation of existing state-of-the-art face recognition schemes on constrained devices is not feasible. Wavelet transforms provide an alternative to, or can be used as an initial dimension reduction step before applying, other dimension reduction techniques such as PCA. This greatly reduces the image size, which in turn leads to a significant improvement in efficiency.

The Wavelet Transform (WT) is a technique for analysing finite-energy signals at multiple resolutions. It provides an alternative tool for short-time analysis of quasi-stationary signals, such as speech and image signals, in contrast to the traditional short-time Fourier transform. The Discrete Wavelet Transform (DWT) is a special case of the WT that provides a compact representation of a signal in time and frequency and can be computed very efficiently. A wavelet-transformed image is decomposed into a set of subbands with different resolutions, each represented by a different frequency band. These components capture different aspects of the same face image, and we studied the use of each frequency band (a single stream) at different resolutions as a representation of a given face image. We investigated the effects that varying lighting conditions, facial expressions and eyeglasses have on face features extracted from different subbands of the wavelet-transformed image.
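
A compact sketch of single-subband (single-stream) recognition in this spirit: extract a low-resolution approximation subband as the feature vector and classify with a nearest-neighbour rule. The gallery images, decomposition depth and distance measure below are assumptions.

```python
import numpy as np
import pywt

def subband_feature(face: np.ndarray, level: int = 3, wavelet: str = "haar") -> np.ndarray:
    """Use the level-3 approximation (LL) subband as a compact face feature vector."""
    coeffs = pywt.wavedec2(face.astype(float), wavelet, level=level)
    return coeffs[0].ravel()

def nearest_neighbour(probe: np.ndarray, gallery: list, labels: list) -> str:
    dists = [np.linalg.norm(probe - g) for g in gallery]
    return labels[int(np.argmin(dists))]

rng = np.random.default_rng(0)
faces = {name: rng.random((64, 64)) for name in ["alice", "bob", "carol"]}
gallery = [subband_feature(img) for img in faces.values()]
probe = subband_feature(faces["bob"] + 0.02 * rng.random((64, 64)))  # same face, new conditions
print(nearest_neighbour(probe, gallery, list(faces)))   # expected: bob
```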

A large number of experiments were conducted to evaluate the performance of wavelet-based face recognition. Publicly available face databases such as BANCA, JAFFE, AT&T (ORL) and Yale, a recently acquired database captured using a PDA, and a collection of our own facial videos were used for these experiments. We compared the performance of different wavelet filters at different decomposition levels. We tested the effect that different nearest-neighbour (NN) classification methods and illumination normalisation have on the performance of the presented wavelet-based recognition scheme. A novel approach to illumination normalisation based on gamma correction is also presented. The results of identity verification experiments show that the use of wavelet features has better or comparable results to other known schemes. The results of identification show that applying PCA in the wavelet domain has comparable results to the common approach of applying PCA in the spatial domain, but the former is significantly more efficient than the latter. However, wavelet features can also be used (without applying PCA to further reduce the dimensionality) at the expense of additional storage (for a large database of identities); the advantage of wavelet-only features is that new faces can be added to the collection of existing faces without having to recreate a new subspace.

Experimental results of the single-stream face recognition showed that the different characteristics of individual subbands can be exploited to achieve better recognition accuracy in different operating scenarios (e.g. controlled/uncontrolled lighting conditions, a cooperative/uncooperative user). Based on these observations, we investigated a multi-stream approach to face recognition based on match score fusion. The significance of the proposed wavelet-based face recognition is its efficiency and suitability for platforms with constrained computational power and storage capacity.
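
A minimal sketch of match-score fusion of this kind is shown below: per-subband (per-stream) match scores are combined by a weighted sum. The weights here are hypothetical; in a multi-stream system they would be chosen according to the operating scenario.

```python
# Minimal sketch of multi-stream fusion by weighted match-score averaging.
# The per-stream weights are placeholders, not values from the thesis.
import numpy as np

def fuse_scores(subband_scores, weights):
    """Combine normalised match scores from several wavelet subbands
    (streams) into a single score via a weighted sum."""
    scores = np.asarray(subband_scores, dtype=np.float64)
    w = np.asarray(weights, dtype=np.float64)
    w = w / w.sum()                      # make the weights sum to 1
    return float(np.dot(w, scores))

# e.g. fusing match scores from the LL, LH and HL subband streams
fused = fuse_scores([0.82, 0.65, 0.71], weights=[0.5, 0.25, 0.25])
```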

In addition to efficiency, the wavelet-based methods achieve recognition accuracies comparable to many existing methods. Moreover, working at or beyond decomposition-level-3 subbands gives robustness against high-rate compression and noise interference, which is an advantage when identifying faces captured in video footage.


Image Processing, Security / Privacy

Martin Dietze
Second Generation Image Watermarking in the Wavelet Domain
Area of Study: Image Processing, Security / Privacy
Award: DPhil Computing, 2005
Supervisor: Professor Sabah Jassim

Robust image watermarking aims to embed invisible information in images, typically for copyright protection applications, in such a way that the watermark is robust against various image processing attacks. Such attacks can be divided into signal processing and geometric attacks, leading to different requirements for achieving robustness against them. This thesis investigates approaches to robust image watermarking, focusing on the type of watermarking techniques termed “second generation watermarking”. This class of watermarking schemes increases robustness against geometric attacks by incorporating the image’s perceptual features into the marking/detection process. Additional focus is put on the wavelet transform and its properties relevant to applications in robust image watermarking.

Based on a comparative study of 11 wavelet filters and 2 embedding techniques, assessing their suitability for achieving robustness against 3 signal processing attacks at acceptable image quality, factors for the optimal choice of filter and embedding technique for DWT-based robust watermarking are presented. Although each filter’s performance largely depends on the kind of attack (which is usually beyond the watermarker’s control) and on the embedding technique, there is in fact one filter with good all-round capability with respect to the two, usually conflicting, requirements of maintaining good image quality and achieving robustness against attacks. This is particularly significant because this filter is relatively little known among watermarkers and has thus rarely, if ever, been used in watermarking applications. In the course of this study, a new method to compare the original and the extracted watermark (both binary images) was developed.

This method uses the wavelet domain’s multiresolution property and mimics the way a human would judge a watermark’s quality. A novel wavelet-based method of applying the so-called “dual channel concept” to second generation watermarking schemes is presented. The dual channel concept is a measure to avoid interference between the watermark and the feature detection performed prior to watermark detection. While the original dual channel concept was restricted to colour images in the spatial domain, we propose representing the subbands of a multi-resolution pyramid wavelet decomposition of an image as channels, thus allowing this technique to be used on any kind of image. Experimental results show that this approach improves robustness in many cases; it can even be used to optimise existing watermarking schemes operating on a pyramid decomposition of the image in the DWT domain.
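
To make the general setting concrete, the sketch below embeds an additive watermark into one detail subband of a single-level DWT. It illustrates marking wavelet coefficients in general, not the dual-channel or feature-based schemes of the thesis; the library (PyWavelets/NumPy), wavelet, subband and embedding strength are all assumptions.

```python
# Simplified additive watermark embedding in a DWT subband.
# Not the thesis's scheme; parameters and subband choice are illustrative.
import numpy as np
import pywt

def embed_watermark(image, watermark_bits, strength=2.0, wavelet="haar"):
    """Embed a +/-1 watermark sequence into the horizontal detail (LH)
    subband of a single-level DWT and return the marked image."""
    cA, (cH, cV, cD) = pywt.dwt2(image.astype(np.float64), wavelet)
    wm = np.resize(np.where(np.asarray(watermark_bits) > 0, 1.0, -1.0),
                   cH.shape)
    cH_marked = cH + strength * wm       # additive, spread-spectrum style mark
    return pywt.idwt2((cA, (cH_marked, cV, cD)), wavelet)
```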

A new approach to second generation watermarking is presented. Instead of performing the same feature detection before embedding and before reading the mark, we propose a training-based feature recognition method. This has the advantage of avoiding the capacity-limiting split of the image into logical channels, and of using the watermark as part of the feature used for later detection instead of treating it as interference. The result is a watermarking scheme with improved robustness due to more reliable location of embedding positions in the marked image. Experiments show that this technique already achieves remarkable robustness against sophisticated combinations of image processing attacks. The experience from this research suggests that full robustness against the commonly used watermarking benchmarks, which still defeat most of today’s watermarking schemes, is feasible.


Image Processing, Medical Imaging

Shan Khazendar
Towards Computer Based Systems for Classification of Human Ovarian Tumours Based on Medical Ultrasound Images
Area of Study: Image Processing, Medical Imaging
Award: DPhil Computing, 2016
Supervisors: Professor Sabah Jassim, Mr Hongbo Du

Ovarian cancer is one of the acute diseases of modern society. Ultrasound images of the abdominal area reveal cysts whose appearance can assist the diagnosis of ovarian cancer. However, in practice, inexperienced ultrasound operators, and even doctors, have difficulty differentiating between different types of cysts and often have to rely on biopsies or surgery to make a final decision. My research area is ultrasound image processing. The overall aim of the research is to use and/or develop sophisticated image processing techniques to analyse ultrasound images, and to apply novel classification solutions to the results of the image analysis in order to determine different types of cysts accurately. The research work mainly consists of:

  1. Gaining a systematic understanding of the current research in ovarian cancer diagnosis, the state-of-the-art techniques related to ultrasound image processing and effective solutions in image classification
  2. Developing new techniques and methods both in image processing and classification that are specifically relevant to ultrasound image understanding
  3. Constructing a prototype software tool that applies the techniques and methods for automatic identification and severity scoring of cysts developed in (2).

The outcome of this research will enable more accurate diagnosis of ovarian cancer and at the same time minimise the need for biopsies.

Read abstract


Taban F Majeed
Computer-Aided Detection and Diagnosis in Digital Mammography
Area of Study: Image Processing, Medical Imaging
Award: DPhil Computing, 2016
Supervisors: Dr Harin Sellahewa, Dr Naseer Al-Jawad

Breast cancer is the most common form of cancer among women worldwide, and its early detection improves the chances of successful treatment and recovery. The use of computer systems to assist clinicians in digital mammography image screening has advantages over traditional methods.

The main aims of my research area are:

  • Developing computer-aided diagnosis schemes to assist radiologists in diagnosing breast cancer from mammograms, providing both effective and efficient improvements to existing algorithms that segment mammogram images and locate mass lesions.
  • Designing and developing novel solutions to enhance digital mammograms and extract features for effective detection and classification of breast cancer.
  • Providing a new algorithm to evaluate and report the results of mass lesion detection.

Read thesis


Jinming Ma
Wavelet Based Images/Video Compression Techniques for Telemedicine Application
Area of Study: Image Processing, Medical Imaging
Award: DPhil Computing, 2002
Supervisors: Professor Chris Adams, Professor Sabah Jassim

The advent of multimedia computing has led to an increased demand for digital images and videos (sequences of frames). Telemedicine is a major application of digital image processing and is increasingly deployed in a variety of settings. However, there are still many challenges associated with this technology, especially high-quality real-time video compression.

The wavelet transform has become increasingly important in image compression applications because of its flexibility and efficiency in representing non-stationary signals. Generally, wavelet-based compression does not produce blocky effects or artifacts. However, existing wavelet-based compression systems are computationally too intensive for real-time transmission of video. In this thesis, we propose a novel region of interest (ROI) wavelet-based real-time video compression technique that is especially suited to telemedicine applications. We demonstrate that, with currently available and affordable hardware, the proposed ROI-based compression technique can simultaneously meet the stringent constraints imposed by the special requirements of telemedicine on processing time, bandwidth and quality. The region of interest (e.g. the area of medical examination/surgery) in the decompressed images retains the highest possible image quality throughout the transmission, at the expense of the less important outside region.
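
The core idea can be sketched very simply: wavelet coefficients that cover the ROI are quantised with a fine step, and all other coefficients with a coarse step. The sketch below is only an illustration of that principle, not the thesis's compression system; it assumes PyWavelets/NumPy, a single decomposition level, a frame with even dimensions, and a binary ROI mask the same size as the frame.

```python
# Sketch of ROI-aware quantisation of DWT coefficients (single level).
# Library, wavelet, step sizes and mask handling are illustrative assumptions.
import numpy as np
import pywt

def roi_quantise(frame, roi_mask, fine_step=2.0, coarse_step=32.0,
                 wavelet="haar"):
    """Quantise the DWT detail coefficients of a video frame, preserving
    more detail inside the region of interest."""
    cA, (cH, cV, cD) = pywt.dwt2(frame.astype(np.float64), wavelet)
    # Single-level subbands are half-size, so downsample the ROI mask.
    mask = roi_mask[::2, ::2].astype(bool)
    step = np.where(mask, fine_step, coarse_step)

    def q(band):
        return np.round(band / step) * step   # uniform quantisation

    return cA, (q(cH), q(cV), q(cD))
```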

Corresponding to the novel ROI-based wavelet transformation, suitable quantization and coding schemes are explored and incorporated into a complete video compression and transmission system. Factors that influence the performance of the ROI system with respect to computation time, compression ratio and video quality are investigated, with the aim of providing relevant suggestions on how to achieve the desired performance. Software and user-requirement factors include the video frame size, the size of the ROI, the choice of wavelet filter, the scheme and depth of decomposition, and the quantization and coding schemes. Other performance-influencing factors relate to the implementation platform, such as local machine performance and network bandwidth. Comprehensive testing of the various software factors and of CPU speed is carried out to demonstrate the viability of the ROI system on currently available technology.


Software Design

Suleyman Al-Showarah
Usability of Smartphones for Elderly People
Area of Study: Software Design
Award: DPhil Computing, 2015
Supervisors: Dr Naseer Al-Jawad, Dr Harin Sellahewa

I am working in the field of smartphone usability for elderly people, taking into account their experience in using smartphones. This field is relatively new and still open for research. The research also involves testing the smartphone response levels of elderly people compared with other age groups, using different metrics based on eye movement and touch-screen interaction with smartphone applications. I am collecting data on different types of gestures from different age groups to be used in my experiments.

Read thesis


Jeremy Malcolm Randolph Martin
The Design and Construction of Deadlock-Free Concurrent Systems
Area of Study: Software Design
Award: DPhil Computing, 1996
Supervisors: Dr Ian East, Professor Sabah Jassim

It is a difficult task to produce software which is guaranteed never to fail, but it is a vital goal for which to strive in many real-life situations. The problem is especially complex in the field of parallel programming, where there are extra things that can go wrong. A particularly serious problem is deadlock. Here we consider how to construct systems which are guaranteed deadlock-free by design.

Design rules, old and new, which eliminate deadlock are catalogued, and their theoretical foundation illuminated. Then the development of a software engineering tool is described which proves deadlock-freedom by verifying adherence to these methods. Use of this tool is illustrated with several case studies. The thesis concludes with a discussion of related issues of parallel program reliability.
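
One widely used design rule of this kind is global resource ordering: every process acquires shared resources in the same fixed order, so a cycle of processes each waiting for another's resource cannot form. The sketch below illustrates that general rule in Python threads; it is not a rule or example taken from the thesis itself, which works in the setting of CSP-style concurrent systems.

```python
# Illustration of a generic deadlock-avoidance design rule (resource
# ordering), not the thesis's formal framework.
import threading

lock_a = threading.Lock()
lock_b = threading.Lock()

def worker(name):
    # Both workers take lock_a before lock_b; reversing the order in one
    # of them would reintroduce the possibility of deadlock.
    with lock_a:
        with lock_b:
            print(f"{name} holds both resources")

threads = [threading.Thread(target=worker, args=(n,)) for n in ("p1", "p2")]
for t in threads:
    t.start()
for t in threads:
    t.join()
```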


Biometrics

Abdulbasit Al-Talabani
Emotion recognition from speech signals
Area of Study: Biometrics
Award: DPhil Computing, 2016
Supervisors: Dr Harin Sellahewa, Professor Sabah Jassim

Designing a model to automatically recognise emotion in speech is the main aim of my research. Recordings of people saying different sentences with different emotions are used to train the model. Changes in various parameters of the digital speech signal while uttering emotional speech can carry significant information about emotion. The focus of this study is extracting and selecting relevant features, dimension reduction, and using suitable classifiers. Emotional speech feature analysis can also tell us a lot about the nature of the emotion in terms of spontaneity, culture, and other factors that influence it.
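
A toy version of this pipeline (feature extraction followed by classification) is sketched below using two very simple prosodic-style features and a nearest-centroid classifier. The thesis works with far richer feature sets and classifiers; the feature choice, frame length and classifier here are illustrative assumptions only.

```python
# Toy sketch: per-frame energy and zero-crossing rate as utterance features,
# classified by nearest centroid. All choices are illustrative.
import numpy as np

def frame_features(signal, frame_len=400):
    """Summarise an utterance by the mean/std of frame energy and
    zero-crossing rate."""
    signal = np.asarray(signal, dtype=np.float64)
    n_frames = len(signal) // frame_len
    frames = signal[: n_frames * frame_len].reshape(n_frames, frame_len)
    energy = np.mean(frames ** 2, axis=1)
    zcr = np.mean(np.abs(np.diff(np.sign(frames), axis=1)) > 0, axis=1)
    return np.array([energy.mean(), energy.std(), zcr.mean(), zcr.std()])

def nearest_centroid(feature_vec, centroids, labels):
    """Assign the emotion label whose training centroid is closest."""
    d = np.linalg.norm(centroids - feature_vec, axis=1)
    return labels[int(np.argmin(d))]
```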

Read thesis


Databases

Adam Floyd
Recommendations for the Design and Implementation of Non-Relational Database Solutions
Area of Study: Databases

My current area of research concerns the world of unstructured data and non-relational databases, often known collectively as ‘NoSQL’. This is a fairly new area of computing that is evolving along with the need to store and manipulate new types of data in large quantities, and, unlike the relational model, which has benefitted from several decades of refinement and standardisation, there is still much uncertainty in the non-relational landscape. This research aims to provide an objective analysis of the many different technologies, such as key-value stores, document stores, object-oriented and graph databases, that fall under the general classification of ‘non-relational’, with a particular emphasis on how they meet the challenges of storing and retrieving unstructured data. This will clarify the landscape and lead to a set of recommendations for good practice in the design and implementation of such technologies to maximise their efficiency and usability.

I am also responsible for the development of the Enhanced Database Normalisation Automator, or EDNA, a software tool for relational database designers with both commercial and educational benefits. Begun as an undergraduate project here at Buckingham and further developed since, this software automates the time-consuming and highly skilled manual process of database design by normalisation. From a user-defined set of attributes and functional dependencies, EDNA can produce a decomposed relation schema in second, third or Boyce-Codd normal form in a matter of seconds (one basic building block of this process is sketched below), before allowing the user to create this schema as a physical table structure in one of the popular database management systems.
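
The sketch below shows one standard building block of normalisation by decomposition: computing the closure of a set of attributes under a set of functional dependencies, which is how candidate keys and normal-form violations are identified. This is textbook material written for illustration, not EDNA's actual code.

```python
# Minimal sketch of attribute closure under functional dependencies.
# Standard textbook algorithm, not taken from EDNA's implementation.
def attribute_closure(attributes, fds):
    """attributes: set of attribute names.
    fds: list of (determinant_set, dependent_set) pairs.
    Returns every attribute functionally determined by `attributes`."""
    closure = set(attributes)
    changed = True
    while changed:
        changed = False
        for determinant, dependents in fds:
            if determinant <= closure and not dependents <= closure:
                closure |= dependents
                changed = True
    return closure

# Example: with A -> B and B -> C, the closure of {A} is {A, B, C},
# so A is a candidate key of the relation R(A, B, C).
fds = [({"A"}, {"B"}), ({"B"}, {"C"})]
print(attribute_closure({"A"}, fds))     # {'A', 'B', 'C'}
```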

EDNA can then generate this logical schema as a physical schema in one of the following database management systems:

  • Microsoft Access
  • Microsoft SQL Server
  • MySQL
  • Oracle

EDNA has been developed in Visual C# using an object-oriented approach as a Windows Forms application linked to a dynamic link library (DLL) containing the normalisation functionality. Please note that the software is still in a very early beta form and is provided strictly ‘as is’ with no warranty given or implied and no guarantee of fitness for purpose. The primary purpose of providing this free-of-charge download is to allow the software to be tested by a wider audience and gain additional feedback; comments on the software are encouraged and should be addressed to adam.floyd@buckingham.ac.uk.

Two versions of EDNA are available:

  1. a compressed installer package for the complete executable program containing both the normalisation library and user interface. You need to extract this and run EDNAsetup.msi to install the program.
  2. a library containing only the normalisation and data storage functionality with no user interface, for integration with your own projects.

Security / Privacy, Wireless Communication

Alan Anwer Abdulla
Securing Data Transmission over Wireless Networks using Steganography in Digital Image Streams
Area of Study: Security / Privacy, Wireless Communication
Award: DPhil Computing, 2016
Supervisors: Professor Sabah Jassim, Dr Naseer Al-Jawad

This project aims to investigate the use of steganography in digital images to secure transactions over wireless networks. Securing sensitive transactions over a wireless connection using traditional cryptography alone is not a viable solution, while information hiding offers a viable alternative. The investigations include assessing the security of various information hiding techniques, in both the spatial and frequency domains. We shall develop robust steganographic techniques to hide sensitive information in multimedia objects. We shall also investigate the types of sensitive objects to be transmitted, feature extraction techniques that provide a secure environment for hiding information, and techniques to raise the hiding capacity.
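
To make the idea of hiding information in an image concrete, the sketch below shows the simplest spatial-domain technique, least-significant-bit (LSB) replacement. It is included only as an illustration of image steganography in general; the thesis develops considerably more robust and secure embedding schemes than this.

```python
# Illustrative sketch of LSB steganography in an 8-bit grayscale image.
# Not the thesis's embedding scheme.
import numpy as np

def lsb_embed(cover, bits):
    """Replace the least significant bit of the first len(bits) pixels
    of the cover image with the message bits."""
    stego = cover.astype(np.uint8).copy()
    flat = stego.ravel()                       # view into the copied image
    bits = np.asarray(bits, dtype=np.uint8)
    flat[: bits.size] = (flat[: bits.size] & 0xFE) | bits
    return stego

def lsb_extract(stego, n_bits):
    """Recover the first n_bits message bits from the stego image."""
    return stego.ravel()[:n_bits] & 1
```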

Read thesis


Kotobhar David
Award: MSc Applied Computing, 2016
Supervisors: Dr Torben Kuseler, Dr Naseer Al-Jawad

Ahmed Alnajar
Award: MSc
Supervisors: Dr Naseer Al-Jawad, Dr Harin Sellahewa

Nazar Al-Hayani
Award: DPhil Computing, 2016
Supervisors: Professor Sabah Jassim, Dr Naseer Al-Jawad

Read abstract: Nazar Al-Hayani on secure video transmission over open and wireless network channels

Inebi Douglas
Award: MSc Applied Computing, 2015
Supervisor: Dr Harin Sellahewa

Uchenna Onukwue
Investigating the Effects of Wavelet Filters on a Mean-Reversion Strategy
Award: MSc Applied Computing, 2015
Supervisor: Mr Hongbo Du

Read abstract: Uchenna Onukwue