Given the global Covid-19 pandemic, Deep Berlin, one of the capital's leading AI communities, turned its second 'AI for Good Hackathon' in April into a fully remote, digital event. While last year's hackathon revolved around sea rescue and technological ways to help locate castaways in the Mediterranean, this year's participants were tasked with using machine learning and computer vision to tackle challenges around forest fires and air quality.
The impact of climate change and ways to tackle its consequences
In various ways – some more visible than others – our planet is suffering the consequences of climate change: the devastating bush fires in Australia have killed more than 30 people and over half a billion animals, scorched more than 120,000 km² of land and destroyed 2,600 homes in their wake. As temperatures rise steadily, wildfires like these may become more common around the world.
There is an evident and urgent need to address the effects of climate change, and researchers have identified advances in technologies such as artificial intelligence and machine learning as a driving force in preventing disasters, preserving our planet and saving lives. Satellite imagery and remote sensor data present an opportunity to develop machine learning models that not only predict potential threats and support planning, but also help implement sustainable solutions for the future. Using computer vision and image analysis, for instance, robust predictions can be made about forest areas and their risk potential. By modelling predictive actions for fire resilience, experts can provide immediate relief in case of disaster and, in the best case, prevent it altogether.
Hackathon teams deliver promising approaches
Split into a number of cross-functional teams, participants of the AI for Good Hackathon used their collective expertise to build machine learning models tackling one or more of the challenges provided by the event's initiators – from tree categorization and carbon density to fire resilience. We have selected the three most promising approaches:
Team 1 worked with satellite imagery to identify areas experiencing stark deforestation and used machine learning to track the mass and density of forest area over time in a specific region of Brazil. With their model, the team was able to make qualified predictions of the deforestation rate, i.e. how much forest is lost per year in square kilometres, as well as the rate of change. They also drew on radar images and worked on an algorithm to predict and visualize the changing forest areas.
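To make the idea of a deforestation rate concrete, here is a minimal sketch: given yearly forest-area measurements (as one might derive from satellite imagery), a least-squares slope gives the average change in km² per year. The numbers below are illustrative, not Team 1's actual Brazilian data.

```python
def deforestation_rate(years, areas_km2):
    """Least-squares slope of forest area over time (km² change per year)."""
    n = len(years)
    mean_y = sum(years) / n
    mean_a = sum(areas_km2) / n
    num = sum((y - mean_y) * (a - mean_a) for y, a in zip(years, areas_km2))
    den = sum((y - mean_y) ** 2 for y in years)
    return num / den  # negative slope => forest area is shrinking

# Illustrative yearly forest-area estimates for one region (km²)
years = [2015, 2016, 2017, 2018, 2019]
areas = [5000.0, 4880.0, 4750.0, 4640.0, 4500.0]

rate = deforestation_rate(years, areas)
print(f"Estimated change: {rate:.1f} km² per year")  # -124.0 for this data
```

A real pipeline would first derive the area values themselves, e.g. by classifying forest pixels in each year's satellite scene and summing their area.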
Resources Team 1:
Team 2 worked with GEDI LIDAR data. GEDI is a LIDAR instrument mounted on the International Space Station (ISS) that measures the height of objects on Earth – in this case the height of trees. The team took available datasets covering Brazilian forests and matched them with a NASA dataset that records fires globally to uncover potential correlations.
As a next step, they looked at combining air-quality data with geological information for areas where devastating forest fires had occurred. Models based on these parameters could then predict how likely it is that a fire will break out in a given area soon.
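The prediction step can be sketched as a tiny logistic-regression model that maps per-area features to a fire probability. Everything here is an assumption for illustration: the two features (an air-quality index and a dryness score) and the training data are synthetic stand-ins for the datasets the team actually combined.

```python
import math

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def train_logistic(X, y, lr=0.5, epochs=2000):
    """Plain stochastic-gradient-descent logistic regression."""
    w = [0.0] * len(X[0])
    b = 0.0
    for _ in range(epochs):
        for xi, yi in zip(X, y):
            p = sigmoid(sum(wj * xj for wj, xj in zip(w, xi)) + b)
            err = p - yi
            w = [wj - lr * err * xj for wj, xj in zip(w, xi)]
            b -= lr * err
    return w, b

# Synthetic data: [air_quality_index, dryness], label 1 = a fire occurred
X = [[0.9, 0.8], [0.8, 0.9], [0.2, 0.3], [0.1, 0.2], [0.7, 0.7], [0.3, 0.1]]
y = [1, 1, 0, 0, 1, 0]
w, b = train_logistic(X, y)

def fire_probability(features):
    """Predicted probability of a fire for an area's feature vector."""
    return sigmoid(sum(wj * xj for wj, xj in zip(w, features)) + b)

print(f"P(fire | AQI=0.85, dryness=0.8) = {fire_probability([0.85, 0.8]):.2f}")
```

In practice one would use a library model with proper validation; the sketch only shows the shape of the idea: features per area in, fire likelihood out.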
Resources Team 2:
Team 3 focused on analyzing the impact of tourism on forest fires. They wondered whether wildfires are more likely to start in places with a lot of tourism, and therefore analyzed datasets providing information about the locations of wildfires as well as human activity (tourism and non-tourism locations) for the region of Galicia, Spain. With machine learning models, these two aspects – the development of tourism and non-tourism areas and the locations of wildfires – were then compared over the course of ten years. While they found some potential associations, their work also inspired many more questions about potential factors to analyze in the future.
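The core of such a comparison can be sketched simply: label map cells as tourism or non-tourism areas, count wildfires per category, and compare the per-cell fire rates. The grid labels and fire records below are invented for illustration, not Team 3's Galicia data.

```python
from collections import Counter

# cell_id -> area type; illustrative labels, not real map data
area_type = {0: "tourism", 1: "tourism", 2: "non_tourism",
             3: "non_tourism", 4: "non_tourism", 5: "tourism"}

# each record is the grid cell a wildfire started in (illustrative)
wildfire_cells = [0, 0, 1, 2, 5, 5, 5, 3]

def fire_rate_by_area(area_type, wildfire_cells):
    """Average number of fires per cell for each area category."""
    fires = Counter(area_type[c] for c in wildfire_cells)
    cells = Counter(area_type.values())
    return {t: fires[t] / cells[t] for t in cells}

rates = fire_rate_by_area(area_type, wildfire_cells)
print(rates)  # e.g. {'tourism': 2.0, 'non_tourism': 0.67} for this toy data
```

A rate difference alone does not establish causation, which matches the team's conclusion that the associations they found mainly raised further questions.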