Posted - John Hartshorn: It's a rare event when I'm allowed to blog (this is the first time), so I'd best make good use of the liberty I've been granted. I'm Edinburgh-bound at the end of June, specifically for the INSPIRE conference, so it seemed like a good time to share some thoughts and open the floodgates to the online justice dispensed by those who know more than me. Nothing like the sound of the shallow jumping in at the deep end, eh?
I'll be honest from the start – I've slunk around in the shadows of INSPIRE so far, desperately trying to avoid admitting that I can't quite get my head around what the problem seems to be, despite everyone talking about it. It all sounds very complicated.
The problem, as I see it, is that some of you out there are obliged (or at least contribute to a data supply chain that is) to take a whole load of data, check it, validate it, transform it across to a new schema and, finally, publish it in a new format. Of course, those who live and breathe INSPIRE will scream that I've simplified the problem a little. Don't be offended – isn't that really what the problem is, in layman's terms?
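For the code-minded among you, here's a minimal sketch (in Python) of what that obligation boils down to. The field names and the target schema are invented for illustration - they're not taken from any real INSPIRE theme:

```python
# A toy "check, transform, publish" flow. SourceRecord and TargetFeature
# are invented stand-ins, not a real INSPIRE application schema.
from dataclasses import dataclass

@dataclass
class SourceRecord:              # what a data supplier might hold today
    site_id: str
    name: str
    easting: float
    northing: float

@dataclass
class TargetFeature:             # a stand-in for an INSPIRE-style feature
    inspire_id: str              # identifiers need to be unique and persistent
    geographical_name: str
    x: float
    y: float

def validate(rec: SourceRecord) -> list[str]:
    """Check a record before transformation; return a list of problems."""
    problems = []
    if not rec.site_id:
        problems.append("missing identifier")
    if not rec.name.strip():
        problems.append("empty name")
    return problems

def transform(rec: SourceRecord, namespace: str) -> TargetFeature:
    """Map the source fields onto the target schema."""
    return TargetFeature(
        inspire_id=f"{namespace}:{rec.site_id}",
        geographical_name=rec.name.strip(),
        x=rec.easting,
        y=rec.northing,
    )
```

Publishing would then just be a matter of serialising those target features out in the required format - the point being that each step is mechanical, and mechanical steps can be automated.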
I thought we'd cracked it, to be honest. If you've ever had much to do with my colleagues here at 1Spatial, you'll know that they spend all of their time solving data problems - and they make it seem easy. They make sure that data can be trusted before it's used, propagated, or shared - to them, it's bread and butter stuff. It's the focus of our business and why we exist.
And that's probably why we were in a consortium project that demonstrated automated schema transformation network services to the JRC. You can read all about it here, but if you want an easier-going overview, check out the video we made.
So what treats does 1Spatial have for you at the Edinburgh conference?
My colleague Matt Beare is going to be presenting on Driving Government Efficiency with Improved Location-based Data - and that's what it's all about, right? Trusting the data we use to make critical decisions, and making sure that what is published and shared can be trusted, demonstrably so. In our day-to-day jobs we are all accountable for our decisions, aren't we? If we trust our data because we understand it better, our decisions are surely better informed and supported by evidence. This is even more the case if we can then improve the data by consistently testing its conformance to strict business rules. Better-informed decisions lead to more efficient use of resources and, ultimately, either lower our operational costs or improve the services we provide to the public, our customers and so on.
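To make "testing conformance to business rules" a little more concrete, here's a rough sketch of the idea: each rule is a named check applied to every feature, so the results are repeatable and auditable. The rules below are invented for the example - they're not our actual rule set:

```python
# Rules as named predicates over features; every feature is tested against
# every rule, and the failures are collected into a conformance report.
from typing import Callable

Feature = dict  # e.g. {"id": "a1", "name": "Leith", "area_m2": 120.0, "crs": "EPSG:27700"}
Rule = tuple[str, Callable[[Feature], bool]]

RULES: list[Rule] = [
    ("name is present",  lambda f: bool(f.get("name", "").strip())),
    ("area is positive", lambda f: f.get("area_m2", 0) > 0),
    ("crs is declared",  lambda f: "crs" in f),
]

def check_conformance(features: list[Feature]) -> dict[str, list[str]]:
    """Return, per rule, the ids of the features that break it."""
    report: dict[str, list[str]] = {name: [] for name, _ in RULES}
    for f in features:
        for name, passes in RULES:
            if not passes(f):
                report[name].append(f.get("id", "<no id>"))
    return report
```

Run the same rules over the same data twice and you get the same report twice - which is exactly what makes the evidence demonstrable.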
Anyway, Matt is going to talk about a project he's just completed as part of ESDIN, in which he and his team developed procedures and guidelines for data quality evaluation, edge matching across boundaries and data sets, and model generalisation. As part of this, the team developed a pilot service showing that automating data quality assessment and improvement can bring real efficiencies - in this case, in the production of INSPIRE-compliant data products.
So, having successfully ticked off the two projects for the JRC and ESDIN, we already have proven solutions to tackle some of the issues around INSPIRE. We can read the schema definitions, import data from multiple sources and formats, check its readiness and quality, and then automate the whole process right up to transforming the data to the right schemas and publishing it in the right format. It's just another data problem like all the others - the common thread being the automation of repeatable tasks: gathering up data, validating its readiness and quality, transforming it and publishing it. We already had the tools to do it - it really is just another data problem to solve. Which is what we do.
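Boiled right down, that repeatability is just function composition - something like this sketch, where each step is a placeholder for whatever tooling actually does the work:

```python
# Gather, validate, transform, publish, composed into one re-runnable job.
# Each step here is a placeholder; in practice they would read the schema
# definitions, apply rule checks like the ones above, and write out the
# target format.
from typing import Callable, Iterable

Step = Callable[[Iterable], Iterable]

def pipeline(*steps: Step) -> Step:
    """Compose independent steps into a single repeatable job."""
    def run(data: Iterable) -> Iterable:
        for step in steps:
            data = step(data)
        return data
    return run

def gather(sources):         # flatten many sources into one stream
    return (rec for src in sources for rec in src)

def validate_step(records):  # drop records that fail the checks
    return (r for r in records if r is not None)

def transform_step(records): # schema mapping would go here
    return (r for r in records)

def publish_step(records):   # e.g. serialise to the target format
    return list(records)

inspire_job = pipeline(gather, validate_step, transform_step, publish_step)
```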
That's why I believe we've cracked it.
Anyway, I'll be in Edinburgh for at least some of the week of the conference and I do hope you get a chance to come along, hear about Matt's work and maybe even say hello to me. In the meantime, to read more about our involvement with INSPIRE, click here.
The critical question remains, though - will they let me blog again?