NO.082 Big Data: Challenges and Opportunities for Disaster Recovery
March 28 - 31, 2016 (Check-in: March 27, 2016)
- Sanjay Madria
- Missouri University of Science and Technology, USA
- Takahiro Hara
- Osaka University, Japan
- Cyrus Shahabi
- University of Southern California, USA
- Calton Pu
- Georgia Tech, USA
Description of the meeting
The main purpose of this NII Shonan meeting is to bring together researchers from the multidisciplinary fields of data management and analytics; mobile, sensor, and pervasive computing; geography and urban planning; and disaster response and recovery, together with public agencies and commercial entities, towards using big data for better decision making and problem solving in the event of a disaster. To do so, we need to close the gaps between those who collect the data (data providers), those who could benefit from using the data (domain experts), and those who are capable of developing the methods for storing, managing, and processing the data (technology enablers).
So-called “Big Data” began when the Enterprise era generated the first wave of data through various software applications such as inventory management or human resource applications. Soon the field of computer science realized that there were commonalities in how the data was being stored and accessed, which led to the development of databases. As the size of data grew due to broad adoption by many enterprises (Volume), new research fields emerged to deal with efficient access (parallel databases), integration (data warehouses) and analysis (data mining) of large datasets. However, the second wave of data, human-generated data (the Web), exposed the fundamental challenges resulting from data heterogeneity (Variety); this data is semi-structured (text documents) or unstructured (pictures and videos) and is growing at a much higher rate. The rapid growth of web applications left academics with little opportunity to identify commonalities of data usage, leading to many independent tools that focus on a narrow aspect of data preparation for a given application type and require human-in-the-loop data extraction and preparation. This worked to some extent, as human data creation processes led to a natural gap between data generation and data consumption. Machine-generated data represents the newest wave: it is generated continuously at a high rate (Velocity) from various sensors in the physical world, starting with sensor instrumentation, e.g., pavement traffic loop detectors, SCADA industrial automation sensors, CCTV cameras, and satellite- or plane-based LIDAR sensors, and continuing with inexpensive sensors in our mobile phones, refrigerators, watches, and soon, everything we wear. These three waves of data gave rise to numerous approaches benefiting from data use in critical decision-making (Big Data).
The time is ripe to embark on a fundamental approach to Big Data challenges by assembling stakeholders to review case studies, design and develop several prototypical end-to-end systems, identify the commonalities, and develop lessons-learned stories. This is exactly the goal of our proposed meeting, with disaster response and recovery as its focus application. Efficient and thorough data collection and its timely analysis are critical to any disaster response and recovery system in order to save people's lives during disasters. However, accessing comprehensive data in disaster areas and quickly analyzing it to transform the data into actionable knowledge are major data science challenges. Moreover, the effective presentation of the collected knowledge to human decision-makers is an open problem. Therefore, the proposed meeting aims to study and share experiences in Big Data research, education, and training, as well as to discuss challenges and disseminate solutions, blueprints, and prototypes focusing on the disaster recovery application domain.
Towards this end, experts from various disciplines, including application domain experts with knowledge of disaster response and recovery, need to interact and collaborate effectively. The purpose of this workshop is to bring these experts from different countries together in Japan, with the common goal of improving disaster recovery through Big Data, to initiate information exchange and collaboration. Moreover, the shortage of a knowledgeable workforce presents a further challenge to Big Data management and needs to be addressed through education and training. Therefore, this workshop can provide a forum for the education and training of researchers from multiple disciplines in these complementary areas.
The proposed meeting will address the following topics (the list is by no means exhaustive):
- Data Acquisition (e.g., remote sensing, aerial vehicles, infrastructure sensors and CCTV, mobile devices, wearable sensors, and online sources)
- Information Extraction and Representation
- Data Integration
- Data Analysis
- Information Presentation and Visualization
- Data Privacy, Trust, Quality, Integrity and Security
- Scalable Data Systems
- Data Lifecycle Management
- Citizen Science and Crowdsourcing
- Disaster Response and Recovery Applications