MOOCLink: linking and maintaining quality of data provided by various MOOC providers

Description
The concept of Linked Data is gaining widespread popularity and importance. Linked Data is the method of publishing and linking structured data on the web. Its emergence has made it possible to make sense of the huge amount of data scattered all over the web and to link multiple heterogeneous sources. This leads to the challenge of maintaining the quality of Linked Data, i.e., ensuring that outdated data is removed and new data is included. The focus of this thesis is devising strategies to effectively integrate data from multiple sources, publish it as Linked Data, and maintain its quality. The domain used in the study is online education. With so many online courses offered as Massive Open Online Courses (MOOCs), it is becoming increasingly difficult for an end user to gauge which course best fits his or her needs.

Users are spoilt for choice. It would be very helpful for them to have a single place where they can visually compare the offerings of various MOOC providers for the course they are interested in. Previous work has been done in this area through the MOOCLink project, which involved integrating data from Coursera, EdX, and Udacity and generating linked data, i.e., Resource Description Framework (RDF) triples.

The research objective of this thesis is to determine a methodology by which the quality of data available through the MOOCLink application is maintained, as new courses are constantly being added and old courses removed by the data providers. This thesis presents the integration of data from various MOOC providers, along with algorithms for incrementally updating the linked data to maintain its quality and keep users engaged with up-to-date data; the incremental approach is compared against a naïve approach. A master threshold value, determined through experiments and analysis, quantifies when one algorithm is better than the other in terms of time efficiency. An evaluation of the tool shows the effectiveness of the algorithms presented in this thesis.
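
The following is a minimal sketch, assuming Python with the rdflib library, of the two ideas described above: publishing course records from different providers as RDF triples, and updating an existing graph incrementally rather than regenerating it from scratch. The namespace, property names, and course fields are hypothetical and are not the actual MOOCLink vocabulary or algorithm.

```python
from rdflib import Graph, Literal, Namespace, RDF, URIRef

MOOC = Namespace("http://example.org/mooclink/")  # hypothetical vocabulary

def courses_to_rdf(courses):
    """Convert course records (dicts with provider, id, title, url) into RDF triples."""
    g = Graph()
    g.bind("mooc", MOOC)
    for c in courses:
        course = MOOC[f"{c['provider']}/{c['id']}"]
        g.add((course, RDF.type, MOOC.Course))
        g.add((course, MOOC["title"], Literal(c["title"])))
        g.add((course, MOOC["provider"], Literal(c["provider"])))
        g.add((course, MOOC["url"], URIRef(c["url"])))
    return g

def incremental_update(existing, fresh):
    """Apply only the difference between the published graph and a fresh crawl
    (a sketch of the incremental idea, not the thesis's actual algorithm)."""
    for triple in existing - fresh:   # stale triples: courses no longer offered
        existing.remove(triple)
    for triple in fresh - existing:   # newly added courses
        existing.add(triple)
    return existing

existing = Graph()                    # previously published triples, empty here
fresh = courses_to_rdf([
    {"provider": "coursera", "id": "ml-001", "title": "Machine Learning",
     "url": "https://www.coursera.org/learn/machine-learning"},
])
print(incremental_update(existing, fresh).serialize(format="turtle"))
```
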
Date Created
2016

An adaptable iOS mobile application for mobile data collection

Description
Mobile data collection (MDC) applications have been growing in the last decade, especially in the fields of education and research. Although many MDC applications are available, almost all of them are tailor-made for a very specific task in a very specific field (e.g., health, traffic, weather forecasting). Since the main users of these apps are researchers, physicians, or data collectors in general, it can be extremely challenging for them to make adjustments or modifications to these applications given that they have limited or no technical background in coding. Another common issue with MDC applications is that their functionality is limited to data collection and storage. Other functionality, such as data visualization, data sharing, data synchronization, and data updating, is rarely found in MDC apps.

This thesis addresses the problems mentioned above by adding two enhancements: (a) the ability for data collectors to customize their own applications based on the project they are working on, and (b) new tools that help manage the collected data. This is achieved through a standalone Java application that data collectors can use to design their own mobile apps in a user-friendly graphical user interface (GUI). Once the app has been completely designed using the Java tool, a new iOS mobile application is automatically generated based on the user's input. Using this tool, researchers are able to create mobile applications that are completely tailored to their needs, in addition to enjoying new features such as visualizing and analyzing data, synchronizing data to a remote database, sharing data with other data collectors, and updating existing data.
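
As a rough illustration of the generation idea (shown here in Python for brevity, although the thesis tool itself is a Java application), the sketch below shows a declarative description of the data to be collected driving automatic generation of app code. The field names and the emitted Swift stub are hypothetical, not the thesis's actual output format.

```python
import json

# Hypothetical form specification a data collector might design in the GUI tool.
form_spec = {
    "project": "Air Quality Survey",
    "fields": [
        {"name": "location",  "type": "text"},
        {"name": "pm25",      "type": "number"},
        {"name": "timestamp", "type": "date"},
    ],
}

def generate_model(spec):
    """Emit a simple Swift data-model stub from the form specification."""
    swift_types = {"text": "String", "number": "Double", "date": "Date"}
    lines = [f"struct {spec['project'].title().replace(' ', '')}Record {{"]
    for f in spec["fields"]:
        lines.append(f"    var {f['name']}: {swift_types[f['type']]}")
    lines.append("}")
    return "\n".join(lines)

print(generate_model(form_spec))          # generated Swift struct
print(json.dumps(form_spec, indent=2))    # the spec could also ship as app config
```
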
Date Created
2016

Cognitive software complexity analysis

Description
A well-defined software complexity theory that captures the cognitive aspects of comprehending algorithmic information is needed in the domain of cognitive informatics and computing. The existing complexity heuristics are vague and empirical. Industrial software is a combination of implemented algorithms; however, it would be wrong to conclude that algorithmic space and time complexity is software complexity. An algorithm with more lines of pseudocode can sometimes be simpler to understand than one with fewer lines. It is therefore crucial to determine the understandability of an algorithm in order to better understand software complexity. This work deals with understanding software complexity from a cognitive angle, and with computing the effect of reducing cognitive complexity. The work aims to establish three statements: first, that while algorithmic complexity is a part of software complexity, software complexity does not solely and entirely mean algorithmic complexity; second, that the cognitive understandability of algorithms is important; and third, that reducing cognitive complexity has a measurable impact on software design and development.
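
As a hedged illustration of what a cognitive-angle metric might look like, the sketch below (Python, using the standard ast module) weights control structures and charges extra for nesting. The specific weights and rules are illustrative, loosely in the spirit of published cognitive-weight schemes, and are not the metric proposed in this thesis.

```python
import ast

# Illustrative weights: branching and loops cost more than plain sequence.
WEIGHTS = {ast.If: 2, ast.For: 3, ast.While: 3, ast.Try: 3, ast.FunctionDef: 2}

def cognitive_weight(node, depth=0):
    """Recursively sum weighted control structures; deeper nesting costs more."""
    total = 0
    for child in ast.iter_child_nodes(node):
        w = WEIGHTS.get(type(child), 0)
        if w:
            total += w * (depth + 1)                    # nesting penalty
            total += cognitive_weight(child, depth + 1)
        else:
            total += cognitive_weight(child, depth)
    return total

source = """
def find(items, target):
    for i, x in enumerate(items):
        if x == target:
            return i
    return -1
"""
print(cognitive_weight(ast.parse(source)))  # higher value = harder to comprehend
```
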
Date Created
2016

Distributed SPARQL over big RDF data: a comparative analysis using Presto and MapReduce

Description
The processing of large volumes of RDF data requires an efficient storage and query processing engine that can scale well with the volume of data. The initial attempts to address this issue focused on optimizing native RDF stores as well as conventional relational database management systems. But as the volume of RDF data grew to exponential proportions, the limitations of these systems became apparent, and researchers began to focus on using big data analysis tools, most notably Hadoop, to process RDF data. Various studies and benchmarks that evaluate these tools for RDF data processing have been published. In the past two and a half years, however, heavy users of big data systems, like Facebook, noted limitations in the query performance of these systems and began to develop new distributed query engines for big data that do not rely on map-reduce. Facebook's Presto is one such example.

This thesis evaluates the performance of Presto in processing big RDF data against Apache Hive. A comparative analysis was also conducted against 4store, a native RDF store. To evaluate the performance of Presto for big RDF data processing, a map-reduce program and a compiler, based on Flex and Bison, were implemented. The map-reduce program loads RDF data into HDFS, while the compiler translates SPARQL queries into a subset of SQL that Presto (and Hive) can understand. The evaluation was done on four- and eight-node Linux clusters installed on the Microsoft Windows Azure platform with RDF datasets of 10, 20, and 30 million triples. The results of the experiment show that Presto has much higher performance than Hive and can be used to process big RDF data. The thesis also proposes an architecture based on Presto, Presto-RDF, that can be used to process big RDF data.
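
The following is a minimal sketch of the core idea behind such a SPARQL-to-SQL translation: each triple pattern of a basic graph pattern becomes a scan over a triples(subject, predicate, object) table, constants become filters, and shared variables become join conditions. The table layout and the generated SQL are illustrative; the thesis compiler itself is built with Flex and Bison and targets the SQL subset understood by Presto and Hive.

```python
def bgp_to_sql(patterns):
    """patterns: list of (s, p, o) terms; strings starting with '?' are variables."""
    selects, froms, wheres, seen_vars = [], [], [], {}
    for i, (s, p, o) in enumerate(patterns):
        alias = f"t{i}"
        froms.append(f"triples {alias}")
        for col, term in (("subject", s), ("predicate", p), ("object", o)):
            if term.startswith("?"):                      # variable
                if term in seen_vars:                     # shared variable -> join
                    wheres.append(f"{seen_vars[term]} = {alias}.{col}")
                else:
                    seen_vars[term] = f"{alias}.{col}"
                    selects.append(f"{alias}.{col} AS {term[1:]}")
            else:                                         # constant -> filter
                wheres.append(f"{alias}.{col} = '{term}'")
    return (f"SELECT {', '.join(selects)} FROM {', '.join(froms)}"
            + (f" WHERE {' AND '.join(wheres)}" if wheres else ""))

# Two triple patterns sharing the variable ?course become a self-join on triples.
print(bgp_to_sql([("?course", "subject",  "Databases"),
                  ("?course", "platform", "?platform")]))
```
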
Date Created
2014

Dependency analysis in the HTML5, JavaScript and CSS3 Stack

Description
The Internet is transforming its look; in a short span of time we have come a long way from black-and-white web forms with plain buttons to responsive, colorful, and appealing user interface elements. With the sudden rise in demand for web applications, developers are making full use of the power of HTML5, JavaScript, and CSS3 to cater to their users on various platforms. There was never a need to classify the ways in which these languages can be interconnected, as the front-end code base was relatively small and did not involve critical business logic. This thesis focuses on listing and defining the dependencies between HTML5, JavaScript, and CSS3 that will help developers better understand the interconnections within these languages. We also explore the techniques presently available to developers for keeping their code free of dependency-related defects. We build a prototype tool, HJCDepend, based on our model, which aims at helping developers discover and remove such defects early in the development cycle.
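
As an illustration of the kind of cross-language dependency involved, the sketch below (Python, regex-based purely for brevity) checks whether element ids referenced from JavaScript and CSS are actually declared in the HTML; a missing id would be a dependency-related defect. The matching rules are deliberately simplistic and are not HJCDepend's actual analysis.

```python
import re

html = '<button id="submitBtn">Send</button>'
js   = 'document.getElementById("submitBtn").addEventListener("click", send);'
css  = '#submitBtn { background: steelblue; } #cancelBtn { display: none; }'

declared = set(re.findall(r'id="([^"]+)"', html))                  # ids defined in HTML
js_refs  = set(re.findall(r'getElementById\("([^"]+)"\)', js))     # ids used from JS
css_refs = set(re.findall(r'#([\w-]+)\s*\{', css))                 # ids used from CSS

for ref in (js_refs | css_refs) - declared:
    print(f"potential defect: '{ref}' referenced but not declared in HTML")
```
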
Date Created
2014

A systematic approach to generate the security requirements for the smart home system

Description
A smart home system (SHS) is a kind of information system aimed at realizing home automation. An SHS can connect with almost any kind of electronic or electric device used in a home so that the devices can be controlled and monitored centrally. Today's technology also allows home owners to control and monitor the SHS installed in their homes remotely, typically by giving the SHS network access. Although the SHS's network access brings a lot of convenience to home owners, it also exposes the SHS to more security threats than ever before. As a result, when designing an SHS, the security threats it might face should be given careful consideration. Security threats can be addressed properly by understanding them and knowing which parts of the system should be protected against them first. This leads to the idea of addressing the security threats an SHS might face at the requirements engineering level. Following this idea, this paper proposes a systematic approach to generate security requirements specifications for the SHS. It can be viewed as the first step toward a complete SHS security requirements engineering process.
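
As a rough illustration of systematic requirements generation, the sketch below (Python) derives security requirement statements from a small catalog of assets and threats using a sentence template. The catalog and the template are hypothetical and are not the approach defined in this paper.

```python
# Hypothetical threat model: each asset maps to the threats it faces.
THREATS = {
    "remote control interface": ["spoofing", "eavesdropping"],
    "device firmware":          ["tampering"],
}

TEMPLATE = ("SR-{n}: The smart home system shall protect the {asset} "
            "against {threat}.")

def generate_requirements(threat_model):
    """Turn every (asset, threat) pair into a numbered requirement statement."""
    reqs, n = [], 1
    for asset, threats in threat_model.items():
        for threat in threats:
            reqs.append(TEMPLATE.format(n=n, asset=asset, threat=threat))
            n += 1
    return reqs

for r in generate_requirements(THREATS):
    print(r)
```
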
Date Created
2013

A domain-specific approach to verification & validation of software requirements

Description
Gathering and managing software requirements, known as Requirements Engineering (RE), is a significant and basic step during the Software Development Life Cycle (SDLC). Any error or defect introduced during the RE step will propagate to later steps of the SDLC, and resolving it there will be more costly than resolving a defect introduced in other steps. In order to produce better-quality software, the requirements have to be free of defects. Verification and Validation (V&V) of requirements is performed to improve their quality by applying the V&V process to the Software Requirements Specification (SRS) document. Focusing V&V of software requirements on a specific domain helps improve quality. A large database of software requirements from software projects of different domains was created. Software requirements from commercial applications are the focus of this project; other domains (embedded, mobile, e-commerce, etc.) can be the focus of future efforts. The V&V is done to inspect the requirements and improve their quality. Inspections are done to detect defects in the requirements, and three approaches for inspecting software requirements are discussed: ad-hoc techniques, checklists, and scenario-based techniques. A more systematic, domain-specific technique is presented for performing V&V of requirements.
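
As an illustration of the checklist-based inspection approach mentioned above, the sketch below (Python) flags requirement statements that contain common defect indicators such as ambiguous or unverifiable wording. The checklist items are illustrative and are not the domain-specific checklist developed in this work.

```python
import re

# Small illustrative checklist: each item is a named pattern of risky wording.
CHECKLIST = {
    "ambiguous term":   r"\b(fast|easy|user-friendly|flexible|adequate)\b",
    "escape clause":    r"\b(if possible|as appropriate|where feasible)\b",
    "unbounded plural": r"\b(etc\.?|and so on)\b",
}

requirements = [
    "R1: The checkout page shall load fast and be user-friendly.",
    "R2: The system shall log every failed login attempt within 1 second.",
]

for req in requirements:
    findings = [name for name, pattern in CHECKLIST.items()
                if re.search(pattern, req, re.IGNORECASE)]
    status = ", ".join(findings) if findings else "no checklist findings"
    print(f"{req} -> {status}")
```
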
Date Created
2012