I came to procurement through software development for supply chain, and what is touted as big data isn’t. What we can do much better is use existing data better and exploit Moore’s Law to increase our current analysis capability. What I find very hard to believe is that procurement needs Hadoop clusters.
And please don’t think I was trivialising TradeExtensions. It’s a very neat application of optimisation and from what little I have seen, very user friendly.
I find myself agreeing with Peter here, and Squirrel makes some good points too. We should be correct in the use of the term “big data”.

The original scientific meaning of “Big Data” is “more data than we can store”. An example of such data is streaming video from web cameras. If we can’t store the data itself, we have to do things like process it as streams, compress it, and calculate statistics on the fly.

However, in the business world, “Big Data” has been deflated to mean just “a lot of data”. And if we talk about “a lot of data”, we can start to do things such as data mining, using technologies such as column databases.

A normal sourcing optimization problem, even one involving millions of bids in a highly complex supply chain and zillions of potential combinations, is NOT “Big Data”. But analyzing a set of transactions from a large organization is “a lot of data”.
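The “calculate statistics on the fly” point can be made concrete with a small sketch. This is Welford’s online algorithm for running mean and variance, which needs only constant memory no matter how long the stream is; the camera-frame byte sizes here are made-up illustration data:

```python
class RunningStats:
    """Online mean/variance via Welford's algorithm: statistics are
    updated one observation at a time, so the stream itself never
    needs to be stored."""

    def __init__(self):
        self.n = 0       # observations seen so far
        self.mean = 0.0  # running mean
        self.m2 = 0.0    # sum of squared deviations from the mean

    def update(self, x):
        self.n += 1
        delta = x - self.mean
        self.mean += delta / self.n
        self.m2 += delta * (x - self.mean)

    def variance(self):
        # Sample variance; undefined for fewer than two observations.
        return self.m2 / (self.n - 1) if self.n > 1 else 0.0


stats = RunningStats()
for frame_bytes in [1024, 2048, 1536, 4096]:  # hypothetical frame sizes
    stats.update(frame_bytes)

print(stats.mean)  # 2176.0
```

The same update pattern generalises to other summaries (counts, min/max, histograms), which is exactly what you fall back on when the raw stream is too big to keep.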
Big data can be truly revolutionary. By giving us the right answers to the wrong questions with speed and specious precision we will be able to be wronger in a deeper and more delusionary way than ever before.
I think it depends on the procurement activity. Buying widgets may not need Big Data to accomplish. Commissioning social care in a local authority is a different matter, especially when it comes to defining your requirements. Big Data can help you understand the clients in a depth never before achievable – not just their location and age, but how healthy their lifestyles are, their sleeping patterns, how digitally savvy they are. All this can help you commission a service that truly meets their needs.
Of course, this is a data protection nightmare, but that’s another issue.
There is no ‘big data’ in the vast majority of procurement applications. There simply aren’t enough transactions to record. I can’t think of a single application that would require a Hadoop cluster or similar. The closest might be an Amazon using it to generate orders based on customer transactions, but that would have been achievable many years ago using standard supply chain planning tools.
You can argue that procurement could do with better predictive analytics. But most of that can be done through better application of current supply chain information (and having access to tier n data).
Even applications like TradeExtensions aren’t much more than a neat use of pretty simple minimisation algorithms.
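To give a flavour of the kind of minimisation involved, here is a toy sourcing-award problem: pick one supplier per lane to minimise total cost, subject to a business rule capping the number of distinct suppliers. This is only a brute-force sketch — the bids, lane names and two-supplier rule are all invented, and real tools like TradeExtensions use far more capable solvers:

```python
from itertools import product

# Hypothetical bids: price quoted by each supplier for each lane.
bids = {
    "lane_A": {"s1": 100, "s2": 90, "s3": 95},
    "lane_B": {"s1": 80, "s2": 85, "s3": 70},
    "lane_C": {"s1": 60, "s2": 55, "s3": 65},
}
# Illustrative business rule: award to at most two distinct suppliers.
max_suppliers = 2

lanes = list(bids)
best_cost, best_award = float("inf"), None

# Enumerate every assignment of one supplier per lane.
for choice in product(*(bids[lane] for lane in lanes)):
    if len(set(choice)) > max_suppliers:
        continue  # violates the supplier cap
    cost = sum(bids[lane][s] for lane, s in zip(lanes, choice))
    if cost < best_cost:
        best_cost, best_award = cost, dict(zip(lanes, choice))

print(best_cost, best_award)  # 215 {'lane_A': 's2', 'lane_B': 's3', 'lane_C': 's2'}
```

Brute force obviously doesn’t scale to millions of bids, but the point stands: this is combinatorial optimisation over data that fits comfortably in memory, not “big data”.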
So yes, big data is snake oil.
It doesn’t, however, absolve us as practitioners. We don’t use existing data to anything like its true potential (how much SCOR data does your organisation have, and how deeply do you measure it?)
Agreed, Garry.