1. Influence Extraction from Social Networks
Today’s social networks are widely used to share and exchange information. As a result, they generate huge amounts of data that reveal people’s interests, hot topics, emerging trends, and more. Mining knowledge from such network data in a principled manner is an interesting challenge. This project aims to analyze large-scale social network data and identify influential users or content using visualizations and graph analytics algorithms such as PageRank and community detection.
- Develop efficient implementations of graph algorithms with state-of-the-art graph analytics frameworks.
- Demo your results and insights with data visualization tools.
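To make the influence-ranking idea concrete, here is a minimal sketch of PageRank via power iteration over a toy follower graph, using only the Python standard library. The node names and edges are invented for illustration; a real project would use a graph analytics framework on actual network data.

```python
def pagerank(edges, damping=0.85, iterations=50):
    """Compute PageRank scores for a directed graph given as (src, dst) edges."""
    nodes = {n for e in edges for n in e}
    out_links = {n: [] for n in nodes}
    for src, dst in edges:
        out_links[src].append(dst)
    n = len(nodes)
    rank = {node: 1.0 / n for node in nodes}
    for _ in range(iterations):
        # Every node keeps a baseline share, plus what its in-neighbors send.
        new_rank = {node: (1.0 - damping) / n for node in nodes}
        for node, targets in out_links.items():
            if targets:
                share = damping * rank[node] / len(targets)
                for t in targets:
                    new_rank[t] += share
            else:  # dangling node: spread its rank evenly over all nodes
                for t in nodes:
                    new_rank[t] += damping * rank[node] / n
        rank = new_rank
    return rank

# Toy "who follows whom" graph (all names hypothetical)
edges = [("alice", "bob"), ("bob", "carol"), ("carol", "alice"), ("dave", "carol")]
scores = pagerank(edges)
most_influential = max(scores, key=scores.get)
```

Here "carol" ends up most influential because she receives links from two distinct users, which is exactly the intuition a full-scale implementation would exploit.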
2. Query Processing over Large Knowledge Bases
The objective of this project is to efficiently answer queries over large knowledge bases (KBs) such as Freebase using RDF/SPARQL. Query workload and evaluation are key: a SPARQL query workload needs to be designed, justified, and constructed for benchmarking. The goal is to benchmark and evaluate the performance of different types of SPARQL queries over large KBs such as Freebase, run on RDF/graph databases such as Jena, Neo4j, and others.
- You can replace a KB such as Freebase with an RDF store of chemical compound structures. An example query is returning all chemical compounds that contain a three-membered carbon ring (cyclopropane).
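To illustrate the kind of evaluation a SPARQL engine performs, here is a toy in-memory triple store with single-pattern matching, written in plain Python. The Freebase-style URIs and the triple data are invented placeholders; a real benchmark would run full SPARQL over an engine such as Jena.

```python
# Invented Freebase-style identifiers, for illustration only.
triples = [
    ("fb:m.02mjmr", "fb:type.object.name", "Barack Obama"),
    ("fb:m.02mjmr", "fb:people.person.place_of_birth", "fb:m.02hrh0_"),
    ("fb:m.02hrh0_", "fb:type.object.name", "Honolulu"),
]

def match(pattern, store):
    """Match one (s, p, o) pattern against the store; None acts as a SPARQL variable."""
    s, p, o = pattern
    return [t for t in store
            if (s is None or t[0] == s)
            and (p is None or t[1] == p)
            and (o is None or t[2] == o)]

# Rough analogue of:
#   SELECT ?o WHERE { fb:m.02mjmr fb:people.person.place_of_birth ?o }
results = match(("fb:m.02mjmr", "fb:people.person.place_of_birth", None), triples)
```

A real SPARQL query combines many such triple patterns via joins, which is where the interesting performance differences between engines show up.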
3. Scalable Image Retrieval Systems and Applications
In this project, we look into the content-based image retrieval problem and aim to build an image retrieval system. Similar to search engines, the query to an image retrieval system is an image, and the result should be a ranked list of similar images from the image database. The Oxford Buildings dataset can be used to assess search quality.
- Use images from an online shopping dataset.
- Use images from a Twitter dataset.
- Extend retrieval systems for video data.
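The core ranking step can be sketched as follows: each image is represented by a feature vector (for example, a bag-of-visual-words histogram), and database images are ranked by cosine similarity to the query vector. The image names and vectors below are toy placeholders, not actual Oxford Buildings features.

```python
import math

def cosine(a, b):
    """Cosine similarity between two feature vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return dot / norm if norm else 0.0

# Toy database: image name -> feature histogram (made-up values)
database = {
    "radcliffe_camera.jpg": [5, 0, 2, 1],
    "bodleian.jpg":         [4, 1, 2, 0],
    "random_street.jpg":    [0, 3, 0, 4],
}
query = [5, 0, 1, 1]  # feature vector of the query image

# Rank database images by similarity to the query, best match first
ranked = sorted(database, key=lambda name: cosine(query, database[name]),
                reverse=True)
```

In a real system the vectors come from a feature extractor and the database is indexed (e.g., with an inverted file) so that ranking does not require scanning every image.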
4. Knowledge Base Construction from Text
Recent knowledge bases such as Freebase and YAGO are increasingly helpful for understanding human information needs and queries. In this project, we build a knowledge base using natural language processing (NLP) and information extraction (IE) approaches. We aim to construct and enhance a structured knowledge base from natural text, in a format that machines can process and use to answer human queries.
- Use one of the state-of-the-art tools to extract knowledge from a large text corpus.
- Show how the knowledge base can be used to answer user queries. Example queries include keyword search, natural language questions, and visualization.
- Enhance an existing knowledge base by extracting new knowledge from a text corpus. Try to use the input knowledge base as background evidence for the extraction algorithms.
- Open Information Extraction: the Second Generation
- Toward an Architecture for Never-Ending Language Learning
- Reading The Web with Learned Syntactic-Semantic Inference Rules
- Constructing an Interactive Natural Language Interface for Relational Database
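As a toy illustration of pattern-based information extraction, the sketch below applies a single hand-written "X is the capital of Y" pattern to turn sentences into (subject, relation, object) triples. Real open IE systems learn many such patterns automatically; the sentences and relation name here are invented examples.

```python
import re

# One hand-crafted extraction pattern (a stand-in for learned patterns)
PATTERN = re.compile(r"(?P<subj>[A-Z][\w ]*?) is the capital of (?P<obj>[A-Z][\w ]*)")

def extract(text):
    """Extract (subject, relation, object) triples matching the pattern."""
    return [(m.group("subj"), "capital_of", m.group("obj"))
            for m in PATTERN.finditer(text)]

corpus = "Paris is the capital of France. Ottawa is the capital of Canada."
extracted = extract(corpus)
```

The extracted triples could then be loaded into a structured store and queried, which is the bridge between the IE step and the query-answering step described above.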
5. Database Support for Large-scale Fast Visualizations
Most visualization software and services, such as Tableau, rely on scalable back-end database systems to do the heavy lifting of data processing and computation. The goal of this project is to survey the current literature on database support for large-scale, fast visualization, and to build a platform for implementing new query processing techniques and optimizations for visualization applications.
Tools: Tableau, D3, Prefuse
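One common database-side optimization in this space can be sketched as follows: rather than shipping every row to the visualization client, the backend aggregates values into a fixed number of histogram bins, so the result size depends on the chart resolution rather than on the table size. The data values below are synthetic.

```python
from collections import Counter

def binned_counts(values, lo, hi, num_bins):
    """Server-side pre-aggregation: per-bin counts for a histogram over [lo, hi)."""
    width = (hi - lo) / num_bins
    counts = Counter()
    for v in values:
        if lo <= v < hi:
            # Clamp to the last bin to guard against floating-point edge cases
            counts[min(int((v - lo) / width), num_bins - 1)] += 1
    return [counts[i] for i in range(num_bins)]

# Synthetic measurements; only num_bins numbers cross the wire, not len(values)
values = [0.5, 1.2, 1.7, 3.9, 2.2, 2.4, 2.8]
hist = binned_counts(values, lo=0.0, hi=4.0, num_bins=4)
```

The same idea generalizes to 2-D binning for scatter plots and heat maps, which is a recurring theme in the literature on database support for visualization.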