The world of Diversity + Inclusion gets a lot of well-deserved attention these days, but how do we effectively track, use, and trust D+I data? Join Great Data Minds and special guest Elaine Marino, Director of Diversity + Inclusion Strategy at Charles Schwab, as she reviews the critical importance of data in the world of Diversity and Inclusion.
Digital transformation is disrupting every industry across the globe. In 2019 (and beyond) every company is in the software business, especially in the Business Analytics space. Agility is not an option, it is a business imperative. Enterprises must learn how to adapt and evolve quickly to increasingly rapid changes in technology capabilities, government regulations and economic conditions to avoid extinction. Companies that embrace this urgency to adapt quickly AND change the way they work will succeed. Thriving digital-age businesses continuously deliver high-quality, innovative and competitive data & analytics-enabled solutions to their customers (and internal stakeholders) in the shortest sustainable lead time. These are Lean Enterprises. Lean Enterprises get results:
· 10 – 50% happier, more motivated people
· 30 – 75% faster time-to-market
· 25 – 75% defect reduction
· 20 – 50% increase in productivity
The Scaled Agile Framework (SAFe®) for Lean Enterprises is the world’s leading framework for achieving enterprise agility. SAFe® for Lean Enterprises is a knowledge base of proven, integrated principles, practices and competencies for Lean, Agile and DataOps. SAFe® is a perfect fit for Enterprise Data & Analytics Programs of any size.
Please join Mike Lampa, Great Data Minds Advisor, for this collaborative session (no slides here). Mike will present his case for the use of SAFe for data & analytics programs, its benefits, and how to get started.
About Mike Lampa
Mike Lampa works with enterprises to transform their Analytics Programs so they stay current and relevant with the latest business models, disciplines, regulations, techniques, analytic innovations and technologies in the rapidly evolving Modern Analytics market. Taking a holistic approach that addresses culture, people and roles, and business/technical processes, as well as the selection and implementation of enabling modern technology tools and platforms and tactical migration roadmaps, Mike mentors and guides enterprises toward a smooth transition that increases the value of their Analytics Programs. He has over 23 years of analytics practitioner experience, both as a consultant and as an employee in Global 100 enterprises, with domain knowledge in the Banking & Financial Services, Industrial Manufacturing, High-Tech Manufacturing, Telecommunications, Oil & Gas, Retail and Consumer Packaged Goods industries.
As a leader in clinical quality and innovation for 20 years, DaVita is committed to implementing the latest technologies and developing patient-centric care models. As part of this effort, DaVita is utilizing artificial intelligence to identify opportunities to better serve their patients, improve clinical outcomes, and enhance operational best practices.
Ever wonder if you’re making the right business decisions?
Every leader wants to succeed. To speed their climb to peak business and career performance, most invest massive amounts of research and resources into business intelligence, hoping to create better insights and make better decisions.
However, 70 to 80 percent of these initiatives fail, costing millions of dollars and months of time. The reason is simple – colorful visuals and complex dashboards can’t scale alongside massive amounts of new data.
In this white paper, created specially alongside our friends at Nodin.ai for Great Data Minds readers, you’ll learn the four scalable paths to better business analytics, confident decisions, and improved performance.
READ THE WHITE PAPER HERE:
The new world of data applications, and the cloud architecture that hosts them, is driven by containerization, and this evolution requires us to rethink how applications are configured and executed. During this Innovation Download session, our partners at BACollaborative discuss the concepts behind containers, the architecture, the data journey, cloud deployment, and orchestration, and explain why these are essential when deploying data programs.
Watch GDM Advisor Joe Wilhelmy interview Jason Stangel of Slalom and Mark Mims of Google. This is a candid discussion on how to get started with Artificial Intelligence, along with best practices and practical insights for a successful AI program.
It is generally accepted that successful businesses thrive by consistently making better decisions than their competitors, and the agriculture industry is no exception.
Through the application of artificial intelligence (AI) and machine learning (ML), growers can access increasingly sophisticated data and analytics tools, which enables better decisions, improved efficiencies, and reduced waste in food and biofuel production, all while minimizing negative environmental consequences.
Here is a list of innovative ways professionals in the agriculture industry are taking advantage of the powerful solutions now available through AI technologies.
“Digital farming brings increased precision to crop production by supporting … key farm management decisions with data-driven insights.”1
Crop production plays a critical role in the food and biofuel industries worldwide, and ML is radically improving the way farmers contribute on both fronts.
According to Object Computing Vice President, Machine Learning Solutions, Dr. Jason Bull, “Farmers make hundreds of complex and interconnected decisions every year that impact their risk, sustainability, and business returns.”
Using sensors in the field in concert with ML-enabled digital applications, farmers now have the means to predict harvest yields and evaluate crop quality, identify plant species, and detect crop disease and weed infestations in ways that were previously impossible.2
YIELD PREDICTION AND QUALITY ASSESSMENT
Through the application of ML technology, a farmer can log into a customized dashboard on a computer or tablet and access an accurate assessment of the harvestable versus non-harvestable acres on a given day. The weight and maturity of harvestable crops can also be measured and predicted.
Additionally, using a variety of technologies, including image analysis, crops can be evaluated both before and after harvest for the presence of desirable features, extent of damage (if applicable), nutritional makeup, and other factors that may impact the ultimate viable yield and product price.
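To make the dashboard idea concrete, here is a minimal sketch of the kind of per-acre summary such a tool might surface from field-sensor readings. This is a toy illustration, not an actual ML model: the field names, thresholds, and sensor values are invented for the example.

```python
def assess_acres(readings, maturity_cutoff=0.7, moisture_range=(0.2, 0.6)):
    """Split acres into harvestable / non-harvestable and estimate weight.

    Each reading is a dict of hypothetical sensor-derived values per acre.
    """
    harvestable, non_harvestable = [], []
    for acre in readings:
        mature = acre["maturity"] >= maturity_cutoff
        moisture_ok = moisture_range[0] <= acre["moisture"] <= moisture_range[1]
        (harvestable if mature and moisture_ok else non_harvestable).append(acre)
    # Predicted harvest weight comes only from the acres deemed harvestable.
    estimated_tons = sum(a["est_tons"] for a in harvestable)
    return harvestable, non_harvestable, estimated_tons

field = [
    {"acre_id": 1, "maturity": 0.9, "moisture": 0.3, "est_tons": 4.2},
    {"acre_id": 2, "maturity": 0.5, "moisture": 0.4, "est_tons": 3.8},  # immature
    {"acre_id": 3, "maturity": 0.8, "moisture": 0.7, "est_tons": 4.0},  # too wet
]
ok, not_ok, tons = assess_acres(field)
```

A real system would replace the hard-coded thresholds with a trained model, but the output shape (harvestable acres plus a weight estimate) is what a farmer's dashboard presents.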
Many plants have similar leaf compositions, colors, and shapes, making it difficult to label them using the human eye. Farmers can now rely upon ML to assess complex patterns and accurately identify related plant and weed species.2
Digital identification of plant species saves farmers time, allowing them to increase productivity in other critical areas.
CROP DISEASE AND WEED DETECTION
ML-driven image processing allows farmers to rely upon digital tools to recognize weed species and to determine which crops are healthy and which ones are infested with disease caused by fungi, bacteria, or viruses.
The ability to identify weeds with digital tools makes it possible to train mechanical devices (robots) to pull weeds from fields, protecting the environment from damage caused by pesticide use and saving farmers time, effort, and money.
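The classification step underlying this can be sketched very simply. Production systems use deep image models on raw pixels; the pure-Python nearest-neighbor example below works on made-up, hand-measured leaf features purely to illustrate the crop-vs-weed labeling idea.

```python
import math

def classify(sample, labeled, k=3):
    """Label a feature vector by majority vote of its k nearest neighbors."""
    distances = sorted(
        (math.dist(sample, feats), label) for feats, label in labeled
    )
    votes = [label for _, label in distances[:k]]
    return max(set(votes), key=votes.count)

# Hypothetical features: (leaf_width_cm, leaf_length_cm, green_intensity)
training = [
    ((2.0, 9.0, 0.80), "crop"),
    ((2.2, 8.5, 0.78), "crop"),
    ((1.9, 9.2, 0.82), "crop"),
    ((5.0, 4.0, 0.55), "weed"),
    ((4.8, 4.5, 0.60), "weed"),
    ((5.2, 3.8, 0.52), "weed"),
]
label = classify((2.1, 8.8, 0.79), training)  # nearest neighbors are crops
```

Swap the hand-picked features for embeddings from a trained vision model and the same voting logic drives a weeding robot's pull/don't-pull decision.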
Additionally, digital applications that can evaluate crops for disease can also provide an accurate disease diagnosis and recommend an optimal treatment plan.
This technology helps farmers avoid settling on a one-size-fits-all solution that not only fails to address a specific disease, but may inadvertently cause undesirable side effects, such as pollution or bee-population reduction. It also allows the companies that manufacture crop disease treatment products to better serve their customers.
Today’s farmers rely upon ML-enhanced technology to:
Map and estimate yields
Better meet demand without unnecessary waste
Make smarter harvesting and pricing decisions
Identify and automatically remove harmful weeds
Find and treat crop disease with targeted solutions
Accurately classify weed species
Increase productivity, save time, and operate more economically
… and the list continues to grow!
Those who work with livestock are also experiencing time and cost savings as a result of ML-driven technology.
ML technology currently improves farming and ranching operations in a variety of areas, including livestock health maintenance, dairy and egg production, animal herding, and selective breeding.
LIVESTOCK HEALTH MAINTENANCE
ML helps farmers maintain happy and healthy herds of cattle.
In one use case, a company in Amsterdam uses sensors to monitor cow behavior. ML analysis of the gathered data predicts fertility patterns, diagnoses eating disorders, and alerts farmers to signs of heat stress.4
In another example, a dairy cow’s health status can be evaluated by applying deep learning algorithms to images of white blood cells extracted from the cow’s blood or milk. This process provides indications of certain health issues earlier than simple observation provides. Farmers can then initiate appropriate wellness measures before antibiotics become necessary.4
DAIRY AND EGG PRODUCTION
ML-driven data analysis also helps ranchers optimize operations to more accurately and efficiently manage production of milk, eggs, and other foods.2
For example, the same technology that gathers and analyzes data on dairy cow activity also allows farmers to make operational decisions that improve milk output by up to 30%.5
Autonomous robots aren’t just monitoring fields of crops for weeds and diseased plants. They’re also being trained to herd cattle and sheep.
In addition to physically herding animals toward a desired destination, these ML-driven devices can haul heavy objects from one place to another and cooperate with drones to relay critical information to farmers.6
Selective breeding involves the use of genetic data to optimize livestock pregnancy rotations and encourage the perpetuation of favorable traits, such as milk quality, disease resistance, fertility, and more.
Selective breeding is not new. For centuries, ranchers have relied upon observable factors to produce livestock lines that possess desirable characteristics. Today, with the assistance of ML-supported sensors and applications, a vast amount of data regarding genetic molecular markers, environment, feed makeup, birth patterns, and more can be analyzed, enabling ranchers to make livestock-mating decisions with significantly more accurate results.
Ranchers rely upon ML-enhanced technology to:
– Analyze livestock behavior patterns and provide better care and feeding recommendations
– Diagnose disease and preemptively apply appropriate treatments
– Optimize milk and egg production
– Herd cattle and sheep
– Perform manual labor
– Evaluate and communicate farm-condition data
– Predict fertility patterns and improve breeding selection to produce hardier stock
– Increase productivity, save time, and operate more economically
AGRONOMY, BREEDING, AND BIOTECHNOLOGY
Agronomy, breeding, and biotechnology include practices that help improve agriculture operations and outputs, including water and soil management, hybrid plant optimization, and sustainable agrochemical production and application.
WATER AND SOIL MANAGEMENT
Through ML-assisted analysis of precipitation and evapotranspiration (the process by which water transitions from soil and plant transpiration to the atmosphere), technologists develop more efficient resource management procedures and irrigation systems.2
ML is equally well equipped to analyze data regarding soil conditions, including moisture level, temperature, and chemical makeup, all of which have an impact upon crop growth and livestock well-being.
As with humans, plants’ characteristics are determined by their genes. Certain genes help plants absorb water and nutrients better than others, while others help them fight disease more effectively. Some genes even affect how a plant may end up tasting!
An entire industry revolves around developing commercial seed products that combine the best features of various plant strains.
Without the aid of ML-based technology, a single hybrid development cycle can take scientists seven or eight years (although this is still faster than the speed at which nature performs the process!).
By evaluating masses of data on plant performance in various conditions over time, ML algorithms help scientists better optimize the identification of biotech traits needed to profitably increase yields, given the likelihood of harmful environmental factors, such as unfavorable weather conditions and insect populations, in a given season. This optimized use can also improve the longevity of these hugely beneficial and expensive-to-create biotech traits by reducing resistance buildup.
Simply put, ML helps scientists make predictions regarding which gene combinations will lead to desirable traits in new plants, providing an excellent starting point for developing hardier (and possibly more flavorful!) plant species.
AGROCHEMICAL PRODUCTION AND APPLICATION
One of the key strengths of ML technology is predictive analysis. Companies that develop chemical and biochemical products for farmers and ranchers, such as pesticides, crop disease treatments, antibiotics, microbials, and more, rely upon ML to assist them in ensuring that product efficacy is maximized and that the environmental impact of the products they place on the market is minimized.
Additionally, as mentioned earlier, ML simplifies and streamlines the entire crop-disease identification, diagnosis, and treatment process.
The mobile app capable of providing farmers plant disease diagnoses in real time also delivers product recommendations. Order fulfillment can then be handled via ML-assisted digital processes, and spray drones programmed to recognize the crops that require treatment can pinpoint delivery of the right products to the right plants without disturbing neighboring, unaffected plants.
Plant disease isn’t the only threat to crops and livestock that ML helps professionals combat. By using a similar image-analysis algorithm, pest control companies provide their associates a reliable, real-time tool to identify bugs, allowing them to provide targeted extermination services.7
AGRONOMY, BREEDING, AND BIOTECHNOLOGY SUMMARY
Technologists rely upon ML-enhanced technology to:
– Analyze and optimize water and soil resources
– Develop better hybrid plant species
– Reduce harmful environmental impacts of pesticides and other products
– Remotely diagnose crop disease and provide targeted solutions
– Deliver crop disease treatments and pesticides with pinpoint accuracy via spray drones
– Identify pests and provide targeted solutions
– Increase productivity, save time, and operate more economically
Borrowed with permission from the thought leaders at Object Computing Inc.
For more information contact firstname.lastname@example.org
There is a time and a place for virtual, self-paced training solutions, but if you want to build confident teams and lower the risk of project failure, instructor-led training is key.
With the goal of helping our clients look at the human side of analytics programs, we advocate the use of training for data teams. Although self-paced training is available for all concepts and technologies, there are cases when instructor-led is much more effective. This pertains to public classes but becomes even more valuable with customized training developed specifically based on an organization’s goals and requirements. Benefits of instructor-led training include:
- Enhanced comprehension of training materials based on human engagement.
- Encourages team member collaboration and ideation.
- Unlike virtual training, students have the ability to ask for further clarification, one-on-one with the instructor if needed.
- Networking and “different points of view” if the training is public.
These benefits also apply to virtual instructor-led training that is sometimes the only viable solution for global teams.
It is also key that you consider training over and above technical/certification training. Great Data Minds has gathered together (and continues to add) instructor-led workshops for data teams. See our live training workshops here.
Reality or not, the perception nowadays is that data modeling has become a bottleneck and doesn’t fit in an agile development approach. Plus, with NoSQL being “schema-less”, the perception often is that there is no need for data modeling ahead of coding. You may pretend that it is not happening. Or blame complexity, speed of change, culture, or developers’ mentality. Or argue that data modeling is actually agile.
In the meantime, data modelers feel left out of the development process… because they are! They fear for their jobs, long term if not sooner. This is a recurring theme we sense at every Fortune 500 company across the US and Europe when we give our training ‘Agile Query-Driven Data Modeling for NoSQL’.
The reality is that data modeling needs to be re-invented in order to remain relevant. And since there is so much baggage associated with the term “data modeling”, maybe we should give it a less threatening name, such as “schema design”?
Here, the purists generally stop me to say: “Wait, you can’t go straight into physical modeling without doing first the conceptual then logical models.” Well… maybe, but that’s part of the issue. If you can’t demonstrate that you facilitate speed to market, then you’re viewed as being in the way, and autonomous agile teams will try to get around you.
Logical modeling is counter-productive (for NoSQL)
Working our way backwards in the traditional sequence: conceptual -> logical -> physical, we all know by now that schema design is actually more important with NoSQL than with relational databases, since JSON is so powerful and flexible, but not so forgiving.
Logical modeling makes sense when aiming to achieve an application-agnostic database design, which is still best served by relational database technology. But when designing a NoSQL database, which should be application-specific to leverage the benefits of the technology, it becomes apparent that logical modeling is a counter-productive step. Since logical modeling is supposed to be normalized while NoSQL schema design will be mostly denormalized, why go through the logical modeling exercise at all?
Some sort of conceptual modeling continues to be required to document the understanding and blueprint of the business. But when dealing with NoSQL and agile development, we propose that Domain-Driven Design should replace conceptual modeling. Then, driven by business rules and application screens, reports and queries, we can map directly from domain aggregates in bounded contexts of DDD to the design of the NoSQL physical schema, thereby bypassing logical modeling.
Domain-Driven Design helps avoid “big balls of mud”
Creating an enterprise model is achievable for the initial incarnation of software systems. But without care and attention, inherent domain and technical complexity will, over time, turn monolithic applications into a pattern known as the “big ball of mud”. Change is risky, and the best developers spend valuable time fixing technical complexity and technical debt, instead of adding value in domain evolution.
Domain-Driven Design is a language- and domain-centric approach to software design for complex problem domains. It recognizes that over time, an enterprise conceptual model will lose integrity as it grows in complexity, as multiple teams work on it, and as language becomes ambiguous. With DDD you decompose complex problems so you can be effective at modeling bounded contexts that are defined with unity and consistency. DDD promotes the use of a Ubiquitous Language to minimize the cost of translation between business and technical terminology and to enable deep insights into the domain thanks to a shared language and collaborative exploration during the modeling phase.
DDD consists of a collection of patterns, principles, and practices that enable teams to focus on what’s core to the success of the business while crafting software that tackles the complexity in both the business and the technical spaces. One such pattern is an aggregate, a cluster of domain objects that can be treated as a single unit, for example an order and its order lines.
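The order-and-order-lines aggregate can be sketched in a few lines of Python. This is a hedged illustration of the pattern, not a prescribed implementation: the aggregate root (Order) is the only entry point, so callers never manipulate OrderLine objects directly and invariants such as the running total stay consistent.

```python
from dataclasses import dataclass, field

@dataclass(frozen=True)
class OrderLine:
    """A child entity of the aggregate; immutable once created."""
    sku: str
    qty: int
    unit_price: float

@dataclass
class Order:
    """Aggregate root: all changes to the cluster go through it."""
    order_id: str
    lines: list = field(default_factory=list)

    def add_line(self, sku, qty, unit_price):
        # The root enforces the aggregate's business rules.
        if qty <= 0:
            raise ValueError("quantity must be positive")
        self.lines.append(OrderLine(sku, qty, unit_price))

    @property
    def total(self):
        return sum(l.qty * l.unit_price for l in self.lines)

order = Order("ord-001")
order.add_line("SEED-42", 3, 10.0)
order.add_line("FERT-7", 1, 25.0)
```

Because the whole cluster is read and written as one unit, it maps naturally onto a single NoSQL document, as discussed below.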
Domain-Driven Design maps directly to the concepts of Agile and NoSQL
There’s nothing in agile to suggest that one should skip design. It suggests that design should be evolutionary and iterative. DDD also encourages an iterative process, first at a strategic level to divide the work and focus on what’s important to the business, then at a tactical level to understand the details of each bounded context.
On the database side, relational modeling is vastly different from the types of structures that application developers use. Database joins slow down performance and lead to the object-relational impedance mismatch, causing developers to move away from relational modeling and toward aggregate models. When an aggregate is retrieved from the database, the developer gets all the necessary related data, thereby facilitating manipulations.
A NoSQL document structure corresponds to the structure of a programming object in a much better way than a relational database does, and at the same time, can closely represent DDD aggregates of domain objects.
Back to our proposal that logical modeling should be avoided, why would you break down domain aggregates into normalized entities, only to re-assemble them again during the physical schema design process?
If you had a logical model, how would you go about doing your NoSQL schema design with no knowledge of what queries and reports will look like? In other words, how would you perform entity aggregation without the context of the application screens and their content?
Document schema design
Having defined the aggregates of a bounded context, it is necessary to create additional artifacts: mainly a pragmatic charting of workflows and business rules (not a full BPMN that would be hard to produce, maintain, and digest), plus mockups (or wireframes) for application screens and reports. What’s important here is not to fall into the same traps as reviewed earlier with enterprise data models! But the creation of these artifacts tends to reveal points of attention that may have been overlooked in the DDD phase.
Based on the above streamlined process, the actual schema design step should be clearer. But the flexibility and power of JSON is the next challenge. It seems so intuitive at first that it is easy to overlook the potential traps.
Say you’ve agreed to denormalize and aggregate information into one document. The next question is “how?” There are probably as many different ways to do it as you have members on your team: do you embed locally all related entity data? Or do you embed a partial duplicate or snapshot of remote entity data? Or do you refer to remote entity data, with one- or two-way referencing?
Here are a few factors influencing choices in relationship expression:
- cardinality: does high cardinality lead to practical or technical issues?
- strength of entity relationships: do they all conceptually belong together?
- query atomicity: what info needs to be returned together?
- update atomicity: must it all change together?
- update complexity: what’s the impact if data is duplicated? How do we avoid data inconsistency?
- document size: how much time will it take to load? Are we in a mobile environment where data traffic matters? Will the document size grow indefinitely?
- coding complexity: does it all make sense in the code?
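The embed-versus-reference choice can be made concrete with a small sketch, using Python dicts standing in for JSON documents. Collection names, field names, and values here are invented for illustration; the trade-off shown is the general one, not a specific database's behavior.

```python
# Option 1: fully embedded. The whole order aggregate lives in one
# document; the customer block is a snapshot copy of remote data.
order_embedded = {
    "_id": "ord-001",
    "customer": {"name": "Acme Farms", "region": "Midwest"},
    "lines": [
        {"sku": "SEED-42", "qty": 3, "unit_price": 10.0},
        {"sku": "FERT-7", "qty": 1, "unit_price": 25.0},
    ],
}

# Option 2: referenced. The order points at other documents by id;
# nothing is duplicated, so updates touch one place, but reads fan out.
order_referenced = {
    "_id": "ord-001",
    "customer_id": "cust-9",
    "line_ids": ["line-1", "line-2"],
}

# Query atomicity, one factor from the checklist above, made concrete:
# "show the order with its lines and customer" is one read when embedded,
# but one read per referenced document otherwise.
reads_embedded = 1
reads_referenced = 1 + len(order_referenced["line_ids"]) + 1  # order + lines + customer
```

Update atomicity cuts the other way: renaming the customer in the embedded form means finding and rewriting every order that carries the snapshot, while the referenced form changes a single document.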
The added-value of Data Modelers
Beyond the provocative nature of the headline, the exercise of designing a NoSQL database is obviously far from trivial. The dynamic and evolutive nature of a JSON structure is a wonderful opportunity that should not be spoiled by a careless approach. While developers are certainly capable of doing their own schema design, is it really the best allocation of resources? In enterprises dealing with any kind of application complexity, it becomes quickly obvious that data modelers can be tremendous contributors to the quality of agile development.
Years of experience in data modeling of relational databases have trained them to naturally:
- focus on the core business use case
- create pragmatic models without being over ambitious or perfectionist
- reveal hidden insights and simplify
- experiment with different designs to reach a flexible solution
- challenge assumptions and look at things from a different perspective
- facilitate the dialog between application stakeholders
Data modeling is no longer an exercise taking place just in the early stage of an application lifecycle. Data modeling is now part of the iterative agile development and continuous integration loop, adding value every step of the way.
Even in production, data modeling is used to reverse-engineer all production NoSQL databases to discover new fields and structures that may have been added, providing unique documentation of unstructured and semi-structured data – a critical factor in the context of GDPR and privacy regulations.
As usual when a major shift is under way, there are two possible approaches: resist change, or embrace it. Data modelers should not fear agile development. They should enthusiastically embrace change, become the developers’ best friends, and demonstrate their tremendous added value in achieving higher-quality applications together.
:: Borrowed with author’s permission from the original post. ::
“Nothing is as painful to the human mind as a great and sudden change.” – Mary Shelley, British novelist and author of Frankenstein
Transformation is hard, but it’s a necessity in these accelerated, innovative times. At the forefront of transformation projects is technology and process change. Rarely does management take a hard look at the human side of transformation and this is often what will delay or even kill a project.
Human transformation takes investment, but is well worth the time and effort. Even with that investment, you can expect to lose an average of 30% of your team, as some will choose not to embrace your new direction. However, corporate knowledge is invaluable and if you can guide your current team through the changes, retaining key resources will always be the best route to success.
True story: At one of my past companies, we were awarded an MPP replacement project by a Fortune 500 organization. After working closely with management on requirements and a final deployment plan, we were asked to lead a project kick-off meeting with the current team that was supporting their Teradata environment. As we started to discuss the use of Cloud services and supporting agile tools for ELT, you could feel the frustration rising in the room. One hour into a half-day meeting, three of the team members stormed out. The project was put on hold for two months as the organization regrouped and took a hard look at the current team, its gaps, and the changes that needed to be made.
Our lesson as a consulting vendor was to help our clients early on with the human side of transformation. We came up with a few guiding principles that apply to all organizations.
1. Communicate Change Long Before the Project Begins
Assuming the team will just go along with change is the number one problem. Humans work much better with knowledge. Communicate to your teams starting with the ideation phase. Heck, even make them part of that phase through a Design Thinking session. Continue those communications throughout the project and even after deployment.
2. Education is Key
Develop a formal program for the education of your team. Start with the basics, such as what is driving this change, not just in your organization but in the world at large. Technical training is key, and in these times that means learning additional languages. There should also be an emphasis on agile. Consider “outside of the technical box” training such as Data Storytelling, or even enhanced visualization training. Sit back and watch how appreciative and excited your team is.
3. Promote Armchair Analytics
It is hard to ask your team to give you more than 40 hours, but there are those that thrive on learning new technologies. We made sure each team member had credits from top cloud vendors (usually the first $300 are free) so they could dive into the technology if they wanted to in their spare time. You will be surprised how many folks appreciate this and take advantage.
4. Stand Up Sandboxes with Meaning
We always had special team projects being developed in our Cloud environments that were governed by our management team. Resources were granted hours out of their regular 40 to develop IP for our company while learning new technologies and sharing ideas with their peers.
5. Encourage Mentoring
Inherently, humans want to help humans. Encourage mentoring across teams. We have named our mentoring program “Mentoring 360” to stress that it encourages resources to mentor each other, not just one way.
6. Spend a Bit More Time with Those that Are Struggling
We have seen meltdowns over and over again as innovation is introduced to the organization. There will be those who are frustrated and even bitter. As a manager, take extra time for these resources. We have proven that one-on-one coaching can be helpful.
These are just a few recommendations. Of course, formalized change management can’t be ignored, but humanizing transformation and implementing the above suggestions will produce results and make for a happier, much more effective team.