There’s a limited amount of data and metrics surrounding the way we produce, supply, and consume food. Unfortunately, much of this information is fragmented. Until recently it’s been impossible to bring that data together in a meaningful way.
We set up Agrimetrics to help address challenges in the food system using new technology and data. In this post, we highlight how a range of technologies can tackle complexity in the food system and make it more resilient.
Untangling a complex system
While it’s clear that technology is powerful, the challenge is creating effective business models that support solving these problems.
I get excited about the idea that sharing and connecting data can yield insights that would not be possible without the latest technologies. However, it requires connections to data that are hard to build in a sustainable way. Quite often, people won’t share data without a value exchange. Organisations also have to weigh risk against value when sharing data, because we all know that data can be misused.
In my previous role as Director of the Centre for Food Security, one of the emerging themes was the complexity of the food system. The lack of data sharing has made the food system inherently unpredictable and vulnerable.
The food system is dependent on factors beyond human control, like the weather. Add to that the fact that food is at the heart of human existence, and issues in the system can have far-reaching consequences. The food system is global, so it’s not inconceivable that a drought in one part of the world could cause food shortages elsewhere.
Serendipitously, I came upon the opportunity to bid for funding to create a Centre for Agri-Informatics and Metrics of Sustainable Intensification. I jumped at the chance. A colleague and I started to discuss how we could use data to reduce complexity. In particular, we wanted to tackle the challenge of reconnecting the farmed, natural, and human ecosystems. These have tended to be managed independently of one another.
These are, of course, one ecosystem. By building close connections between the ecosystem and its digital representation through data, we can fix this disconnect.
Innovating the industry by combining science and technology
Agrimetrics is a company that sits truly at the intersection of science and technology. This relationship has always existed, but now it goes beyond the core technical aspects to create something bigger. We will discuss what this means in practice, and some of the technologies that have changed the way we can build on data.
We are essentially the food and farming sector’s Data Marketplace: a place to find, manage and monetise agri-food data. Our mission is to accelerate the sector’s ability to maximise the value of its data. We want to see a sector where the sharing of data powers the next generation of innovation.
Connecting fragmented data
Making the most of the data in agriculture is harder than in many industries because of its fragmentation. At a practical level, we are using technology to provide an infrastructure that supports an agri-food data marketplace.
The key requirements for a data marketplace include:
- Interoperability of data
- Connected data
- Control: data originators need control over their data
- Value exchange: value needs to flow between those who share and those who use data
- Symmetric information: users need to understand the data they are accessing, and providers need to know how their data is being used
All of these requirements bring technical challenges, including the need for detailed permissioning below the level of the data set. The most interesting to me, however, is interoperability. Many are tempted to think that standardisation is the route to interoperability, but standardisation imposes rigidity on the data model when agriculture contains such a wide range of data.
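To make the permissioning point concrete, below is a minimal sketch of access control below the data-set level, where a grant names the individual fields a consumer may see. The names here (`Grant`, `visible_fields`, the example datasets) are hypothetical illustrations, not Agrimetrics’ implementation.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Grant:
    """Permission for one consumer on one dataset, down to named fields."""
    consumer: str
    dataset: str
    fields: frozenset

def visible_fields(grants, consumer, dataset, record):
    """Return only the record fields this consumer is permitted to see."""
    allowed = set()
    for g in grants:
        if g.consumer == consumer and g.dataset == dataset:
            allowed |= g.fields
    return {k: v for k, v in record.items() if k in allowed}

# A grower shares crop and yield figures but keeps field locations private.
grants = [Grant("buyer-42", "harvest-2023", frozenset({"crop", "yield_t_ha"}))]
record = {"crop": "wheat", "yield_t_ha": 8.1, "location": "51.44,-0.94"}
print(visible_fields(grants, "buyer-42", "harvest-2023", record))
# {'crop': 'wheat', 'yield_t_ha': 8.1}
```

In a real marketplace the grants would themselves be data, so originators can revoke or narrow access at any time without republishing the dataset.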
Agricultural data includes numerical data like prices and yields. It also includes things like plant names, which might be in Latin or a local language. There is also a human challenge: some data standards differ from one area to another, and persuading one community of users to abandon their cherished standard in favour of another is likely to be problematic.
We are adopting an alternative. By using semantic data models, we provide comprehensive, machine-readable descriptions of the entities which the data is intended to represent, as well as the relationships between them. The value of this approach is that it begins with a fundamental description of the world, which can be made real in many different ways.
The description begins with the general case, for example: a cow produces milk; humans drink milk. It ends at the specific: Ermintrude is a cow; Ermintrude produced 5,000 litres of milk last year; Susan is a human; Susan drank a pint of milk yesterday.
For example, the cow entity can be specified in any language, or it can be made to represent a specific cow by using data. This data can be supplied in any form, as the example illustrates with differing units used to measure volumes of milk.
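The cow-and-milk description above can be sketched as a toy triple store, where both the general schema and the specific instances live in the same graph of subject–predicate–object facts. Real semantic models use RDF, ontologies, and SPARQL; the predicates below are illustrative stand-ins only.

```python
# Facts as (subject, predicate, object) triples. The general case ("a cow
# produces milk") and the specifics (Ermintrude, Susan) share one graph.
triples = {
    ("Cow", "produces", "Milk"),
    ("Human", "drinks", "Milk"),
    ("ermintrude", "is_a", "Cow"),
    ("ermintrude", "produced_litres_last_year", "5000"),
    ("susan", "is_a", "Human"),
    ("susan", "drank_yesterday", "1 pint of milk"),
}

def match(s=None, p=None, o=None):
    """Pattern query: None acts as a wildcard, like a SPARQL variable."""
    return {(a, b, c) for (a, b, c) in triples
            if s in (None, a) and p in (None, b) and o in (None, c)}

# Which individuals are cows?
print(match(p="is_a", o="Cow"))  # {('ermintrude', 'is_a', 'Cow')}
```

Because the schema is itself data, a query can start from the general statement (cows produce milk) and traverse down to specific animals and their recorded yields, regardless of the language or units each record arrived in.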
Leveraging machine learning and the power of Azure
To build our data marketplace, we realised we needed a sophisticated and connected dataset. We created a knowledge graph with rich semantics to make data interoperable. A knowledge graph provides structure and rich relationships throughout the dataset, and machine learning helps us build and extend those relationships.
Graph databases are quite challenging to make performant and usable. They are queried with SPARQL, which isn’t a friendly language, and a small mistake can easily bring down a database. This is why you need to combine a range of technologies to tackle these problems. In our case, we used Elasticsearch to allow for rapid querying, SQL to store numerical data, and GraphDB, a triplestore, to take care of the semantic data.
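The idea of combining stores can be sketched as a simple dispatcher that routes each query to the backend best suited to it: full-text search to a search index, numeric aggregation to SQL, relationship traversal to the graph store. The backend functions below are hypothetical stand-ins, not our production architecture.

```python
# Stand-in backends: in production these would be client calls to a search
# index, a SQL database, and a SPARQL endpoint respectively.
def search_index(q):
    return f"search:{q}"

def sql_store(q):
    return f"sql:{q}"

def graph_store(q):
    return f"graph:{q}"

ROUTES = {"text": search_index, "numeric": sql_store, "semantic": graph_store}

def run_query(kind, q):
    """Dispatch a query to the backend registered for its kind."""
    try:
        return ROUTES[kind](q)
    except KeyError:
        raise ValueError(f"no backend for query kind: {kind}")

print(run_query("numeric", "avg yield"))  # sql:avg yield
```

The benefit of routing at this level is that the unfriendly SPARQL surface is confined to one backend, while fast, common lookups never touch the graph store at all.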
Azure allows us to scale as our data grows. Most importantly, Azure is built with security-by-design to help us keep the data safe. It even uses machine learning and AI to stay ahead of modern security threats and keep its cybersecurity intelligence up to date.
Helping create sustainable food production
The volumes of data being created today are transforming the way we do science. With plentiful data, we can draw on much larger samples and produce more actionable insights for the food system.
As we have highlighted above, however, making the most of the data in agriculture is harder than in many industries because of its fragmentation. Therefore, we have taken a more practical approach, leveraging the power of Azure and machine learning to provide a robust infrastructure that supports the agri-food data marketplace.
By simplifying complex food systems through the use of data, analytics, and AI, we’re improving resilience and helping solve the global challenge of economically, ethically, and environmentally sustainable food production.
About the author
Richard Tiffin is Agrimetrics’ Chief Scientific Officer and Professor of Applied Economics at the University of Reading.
Richard read Agriculture at the University of Newcastle and completed a PhD in Agricultural Economics at the University of London. He lectured in Agricultural Economics at both Newcastle and Durham before joining the University of Reading where he was appointed Professor in 2006.
Richard was previously Director of the Centre for Food Security, leading the University of Reading’s strategic research in the area of food security and fostering internal and external collaborations to meet the multidisciplinary food security agenda. His research, which is focused on diet and health policy, has examined the impacts of alternative food policies on land use in the UK and the impacts of both a soft drink tax and a ‘fat tax’ on health in the UK.
Richard’s research group is currently developing an empirical framework to better understand the cognitive underpinnings of dietary choice.
Kate is the Head of Azure Cloud Solution Architecture for Media, Telco and Professional Services at Microsoft UK, working with customers to architect end-to-end solutions using Microsoft cloud technologies, with an emphasis on creating solutions that leverage data by using AI.
A behavioural neurobiologist by training, she is passionate about the intersection between technology and business, and how new technologies can shape organisations as they evolve.
In her earlier role at Microsoft, she led the Data and AI Cloud Solution Architecture team for Financial Services. Under her leadership, the team helped organisations shape their data strategies in a scalable and responsible way.
Prior to Microsoft, Kate worked at a start-up that used Big Data to predict commodity flows for Financial Services Institutions, focussing on data fusion, macro-economics, and behavioural analysis. She also holds an MSc in Molecular Biology from Bar Ilan University and an MBA from Tel Aviv University.