Developing the data story on Nepal's damage and reconstruction


This post originally appeared on my old website (https://sabine-loos.squarespace.com/blog-1/afterquake-visrisk); I've migrated it below.


by: Sabine Loos, Karen Barns, and Arogya Koirala · July 15, 2019

This year the Labs team of the Global Facility for Disaster Reduction and Recovery (GFDRR), Mapbox, and the Data Visualization Society announced a competition to visualize map data as a way to communicate risk information: #VizRisk2019. Since this is a combo of risk + mapping + visualization, it was right up our alley, so we made a submission.

So let’s get straight to it: we wanted to focus our story on the 2015 Nepal earthquake because 1) Arogya is Nepalese, and his organization, Kathmandu Living Labs, has been working directly with the Nepali government to collect data for its reconstruction efforts, and 2) Sabine’s research for the 2017 call for proposals by the World Bank’s Development Data Group and the Global Partnership for Sustainable Development Data is focused on Supporting Equitable Recovery in Nepal.

Rather than get into the details of the earthquake here, head to our final story at http://afterthequake.surge.sh/.

Our data story process

We can break down our process into a few key steps:

  1. Brainstorming the story
  2. Data exploration
  3. Rinse & Repeat 1-2 until story is polished
  4. Develop website
  5. Prep map data and images for website
  6. Put 5 into 4 and we’re done.

1. Brainstorming the story

Half the battle was coming up with a story we wanted to tell that also had enough supporting data. Sabine and Karen brainstormed 5-10 possible stories, then narrowed them down to two that were actually worth pursuing. Deciding between those two required us to storyboard them fully, and we ultimately chose to highlight the damage and reconstruction progress because of the availability of data.

Storyboarding our possible ideas (we ultimately didn’t choose this story on compounded vulnerability, but still think it’s cool and worth pursuing!)

2. Data exploration

Then we first had to find the data we needed and explore what we were actually working with. The Nepal earthquake is an ideal disaster to highlight because a ton of data was produced, collected, and made openly available after the event. We also wanted to include this publicly available data in our story to raise awareness of the open data initiatives Nepal is taking to support the study of resilience.


The datasets we use here include (a quick R loading sketch follows the list):

  • Household impact assessment (including building damage): Nepal carried out a massive household assessment of impacts to identify beneficiaries for its reconstruction program. Through a collaborative effort between Kathmandu Living Labs and the National Planning Commission of Nepal, the data has been open and publicly accessible since 2017 in the 2015 Nepal Earthquake Open Data Portal.
  • Ground shaking: USGS produces a ShakeMap for every earthquake, showing the shaking experienced in the region around the epicenter.
  • Landslide map: About 25,000 landslides were triggered by the earthquake, all of which were mapped using satellite imagery and made openly available here.
  • Reconstruction progress: The Earthquake Housing Reconstruction Registration Program has tracked reconstruction progress. We extracted the data specific to completed reconstruction for the 11 districts we explore.
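
As a rough sketch, here is how these datasets can be pulled into R. The file names below are placeholders for the actual downloads, not the real paths we used:

# placeholder file names -- substitute your own downloads
library(sf)      # vector data (landslide polygons, boundaries)
library(raster)  # gridded data (USGS ShakeMap)

damage     <- read.csv("household_damage_assessment.csv")  # Open Data Portal export
MMI_rast   <- raster("usgs_shakemap_mmi.tif")              # shaking intensity grid
landslides <- st_read("landslide_inventory.shp")           # mapped landslide polygons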

3. Rinse & repeat 1-2 until story is polished

Even though we understood a fair amount of the data, we still made some exploratory plots in R, like scatterplot matrices and timelines, to see what we were working with. We compared a lot of different maps and their correlations (see the correlation matrix) to spin up a story, and ended up keeping it simple and sweet with a story on the reconstruction process.
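
A compressed version of that exploration might look like the following, using the damage table loaded above (the column names here are illustrative, not our actual fields):

# scatterplot matrix and correlations over a few candidate variables
vars <- damage[, c("damage_ratio", "shaking_mmi", "landslide_count")]
pairs(vars)                                 # quick scatterplot matrix
round(cor(vars, use = "complete.obs"), 2)   # pairwise correlation matrix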

4. Develop website

Meanwhile, Arogya quickly spun up a website framework that we could use to start laying out our ideas. We used a number of tools to pull this together:

  • The entire application is built on top of React, and bootstrapped using the “create-react-app” library.
  • For the slide-style layout, we used Mike Bostock’s Stack.js library. Although a bit dated, we found it to fit our needs perfectly given the little time we had on our hands.
  • For all of our maps, we used the Mapbox platform (Mapbox Studio to design the basemap and upload relevant tilesets, and Mapbox GL JS to render the maps in the front end). We made extensive use of Mapbox’s tileset upload feature: all of the geographic data in the product was uploaded as a tileset and loaded into the map as a new source.
  • Initially, the geographic data was too large for the maps to load on time. To tackle this, we made extensive use of Mapshaper and its simplification features (see the sketch after this list).
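
We drove the simplification through Mapshaper’s interface, but the same step can be scripted from R with the rmapshaper package. Here is a sketch; the input file and the keep ratio are illustrative, not our actual settings:

library(sf)
library(rmapshaper)

wards <- st_read("ward_boundaries.geojson")   # placeholder file name
# retain ~5% of vertices; keep_shapes prevents small polygons from vanishing
wards_small <- ms_simplify(wards, keep = 0.05, keep_shapes = TRUE)
st_write(wards_small, "ward_boundaries_simplified.geojson")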

5. Prep map data and images for website

Data prep in R

To get the data and images ready for our website, we prepped them using multiple tools, including:

  • Most of our data analysis was done in R. A few of our favorite packages include ggplot2 for figures, raster and sp for spatial data, and extrafont for fonts (Karen had never thought about importing new fonts into R before this dataviz competition!).
  • When datasets were quite large, we worked in ArcGIS and mapshaper.org to speed up the processes of clipping and cleaning, respectively.
  • To test whether our datasets looked right, we used geojson.io to quickly visualize the data before importing it into Mapbox.

Choosing color palettes

Another thing we learned about was all the ways we could make good color palettes.
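
For example, RColorBrewer and viridisLite both ship well-tested palettes in R (the specific choices below are just illustrations, not necessarily the palettes in the final story):

library(RColorBrewer)
library(viridisLite)

brewer.pal(9, "YlOrRd")           # sequential palette, good for intensity ramps
display.brewer.pal(9, "YlOrRd")   # preview it
viridis(9)                        # perceptually uniform, colorblind-friendly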

Adding final touches

Finally, we wanted to add some graphics that were more relatable, so Sabine hand-drew images on her tablet.


6. Put 5 into 4 and we’re done.

Arogya handled getting all the figures that Sabine and Karen put together into the website. This involved taking GeoJSONs exported from R (with hex values for the colors listed as an attribute) and uploading them as tilesets into Mapbox. Here’s an example of how that visualization changed between R (on the left) and the final story (on the right):

The same map drafted in R (left) and rendered in the final story (right)
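
For reference, the export step looks roughly like this in R with sf (the layer, column, and break choices are placeholders):

library(sf)
library(RColorBrewer)

wards <- st_read("ward_boundaries.geojson")   # placeholder layer
# bin a damage metric into 5 classes and attach a hex color per feature
pal <- brewer.pal(5, "YlOrRd")
wards$fill_hex <- pal[cut(wards$damage_ratio, breaks = 5, labels = FALSE)]
# write out for upload to Mapbox as a tileset
st_write(wards, "wards_colored.geojson", driver = "GeoJSON")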

Useful code snippets

Data prep in R

Here’s some example code where we simplify a raster dataset of shaking intensity into a prettier polygon (see above for the final map of shaking intensity):

library(raster)   # mask(), cut(), rasterToPolygons()
library(smoothr)  # smooth(), fill_holes()
library(units)    # set_units()

# clip the MMI raster (loaded earlier) to Nepal's boundary
MMI_rast <- mask(MMI_rast, nepal_boundary)
plot(MMI_rast)
# classify the raster into 10 break points
MMI_rast_cut <- cut(MMI_rast, breaks = 10)
# turn the classified raster into polygons, dissolving shared classes
MMI_shp <- rasterToPolygons(MMI_rast_cut, dissolve = TRUE)
# smooth out the jagged polygon edges
MMI_shp <- smooth(MMI_shp, method = "ksmooth", smoothness = 7)
# fill the random small holes (with area less than 500 m2)
MMI_shp <- smoothr::fill_holes(MMI_shp, units::set_units(500, m^2))

Web development in JavaScript

Here’s how we took the shaking-intensity layer and made it flash on and off in our final story:

const animate = timestamp => {
	const maxRadius = 120;
	const minRadius = 5;
	const minOpacity = 0.1;

	// pulse the epicenter point's radius and opacity
	map.setPaintProperty(
		"point",
		"circle-radius",
		Math.abs(Math.sin(timestamp / 600) / 2) * maxRadius + minRadius
	);
	map.setPaintProperty(
		"point",
		"circle-opacity",
		Math.abs(Math.cos(timestamp / 600) / 2) + minOpacity
	);
	// flash the shaking-intensity fill layer on and off
	map.setPaintProperty(
		"shaking-intensity",
		"fill-opacity",
		Math.abs(Math.sin(timestamp / 600))
	);

	// queue the next frame to keep the animation running
	requestAnimationFrame(animate);
};
requestAnimationFrame(animate);
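
Because requestAnimationFrame passes the same timestamp into the callback each frame, the point’s pulse and the layer’s flash stay in phase without any shared state; dividing by 600 just slows the oscillation to a comfortable speed.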

Final thoughts

We had a lot of fun pulling this together but there are definitely some things we would change next time around.

  1. Start collecting data and exploring early. We could have pursued another idea if we had spent more time earlier in the challenge looking for other datasets.
  2. Design your design team. We ended up splitting up tasks according to our strengths: Sabine with spatial data manipulation and background on the story, Karen with figure development and providing practical perspectives, and Arogya with web development and keeping us on track :)
  3. No stress. We all definitely felt the push towards the end of the challenge, but we weren’t putting pressure on ourselves, which made the entire experience more enjoyable.
  4. It’s never perfect. Progress is incremental, and there is still so much we can do to improve. Even after putting in so many hours, we’ve only just scratched the surface of what we want to tell in the future.

Our team


Sabine is a PhD researcher with the Stanford Urban Resilience Initiative who models the impacts of disasters using geospatial data. The VizRisk challenge combined her fascination with maps and storytelling with her work in risk and data visualization, plus she could collaborate with friends Karen and Arogya once again!


Arogya is the Tech and Innovation Lead at Kathmandu Living Labs, a leading civic-tech organization in Nepal and the region. What drew him to the challenge was a) the theme, which is very close to his work back home, and b) the challenging nature of the collaboration, which involved working in close coordination with team members halfway across the globe. So exciting!


Karen is a risk and resilience consultant at Arup, based in San Francisco. She took part in the competition because she wanted to experiment with new tools and ways to visualize data, but mostly she just wanted an excuse to work with Sabine again!
