DataMi Analytics professionals are seasoned veterans with decades of experience in analytics and in model development for practical applications. We don't just throw math at our clients' problems; we work to gain deep familiarity with your business to ensure that everything we deliver is actionable. Doing this entails three phases of engagement:

    Assessment/Evaluation - We start with an immediate evaluation of the targeted business issues and the data sources available to address them, and then perform initial analyses to assess the best direction for inquiry and the potential for success. We typically address questions such as:
     

    • What are the targeted business pains, and what is the business problem at hand? How would you measure a successful solution?

    • What data is readily available for analysis? What about third-party or other outside data?

    • What are your physical data sources, and what challenges do we face in aggregating that data for analysis?

    • What analysis and modeling approach should we take? Which data will we use to address which questions?
       

    We end the assessment phase by developing a going-forward plan with our clients based on this information and our initial findings.

    Core Analysis - This is where we aggregate the full body of data and perform the core analyses per plan. We interact extensively with our clients during this phase, presenting early findings and gaining input on our interpretations. Findings are typically reviewed by DataMi's Chief Scientist for more extensive analysis and modeling. In this manner, every DataMi client benefits from the full depth of our advanced thinking and broad analysis experience.

    Reporting and Simulations - A successful analysis will produce one or more simulation models - graphically displayed animations that predict targeted outcomes based on the values of causal variables. These models can be used to predict either the target variable from its causes or, working in reverse, the values those causes would be expected to take given a value of the target variable.
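    To make the forward and reverse directions concrete, here is a minimal sketch in Python built around Bayes' Theorem; the variables (a promotion as the cause, churn as the target) and all probabilities are hypothetical, chosen only for illustration.

```python
# A minimal, hypothetical sketch of the forward/reverse prediction a simulation
# model supports. The variables (promotion, churn) and probabilities are
# illustrative only, not client data.

# Prior probability of the causal variable and the target's conditional table.
p_promotion = 0.30                          # P(promotion offered)
p_churn_given = {True: 0.05, False: 0.20}   # P(churn | promotion offered?)

# Forward: predict the target from the cause.
p_churn = sum(p_churn_given[x] * (p_promotion if x else 1 - p_promotion)
              for x in (True, False))
print(f"P(churn) = {p_churn:.3f}")

# Reverse: given that a customer churned, how likely is it they had no promotion?
# Bayes' Theorem: P(no promo | churn) = P(churn | no promo) * P(no promo) / P(churn)
p_no_promo_given_churn = p_churn_given[False] * (1 - p_promotion) / p_churn
print(f"P(no promotion | churn) = {p_no_promo_given_churn:.3f}")
```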
     

    Bayesian Simulation Model

    Causal Modeling - Causal modeling, based on the machine-optimized application of Bayes' Theorem, differs fundamentally from statistical analysis in its goals and intent. We use statistics to assess whether a specific set of empirical findings could have occurred by chance alone and can be generalized to broader populations. We use machine learning to find conditional dependencies within large data sets, without regard to whether those findings could be generalized to a broader population. To generalize to a broader population, we would apply machine learning to multiple data samples and then use statistics to determine the consistency of findings across those samples.
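    As a hedged illustration of that two-step idea (re-learn the dependency on many samples, then judge its consistency statistically), the short Python sketch below re-estimates a single dependency on bootstrap resamples of synthetic data; the data, the dependency measure, and the interval are assumptions made only for illustration.

```python
# A sketch of generalizing machine-learned findings across samples:
# re-learn the same conditional dependency on many resamples, then use simple
# statistics to judge how consistent it is. Data here are synthetic.
import numpy as np

rng = np.random.default_rng(0)
n = 5_000
cause = rng.normal(size=n)
target = 0.6 * cause + rng.normal(scale=1.0, size=n)   # synthetic dependency

estimates = []
for _ in range(1_000):                                  # many resamples
    idx = rng.integers(0, n, size=n)                    # bootstrap sample
    estimates.append(np.corrcoef(cause[idx], target[idx])[0, 1])

lo, hi = np.percentile(estimates, [2.5, 97.5])
print(f"dependency strength: {np.mean(estimates):.3f} "
      f"(95% interval {lo:.3f} to {hi:.3f})")
```

    A narrow interval across resamples suggests the learned dependency would hold up in a broader population; a wide one suggests it may not.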

    The use of machine learning for analysis is dynamic: the same analysis is performed on thousands or even millions of subsets of the data to find the most robust solution. So, for example, when we refer to "random forest techniques," the machine is, in the blink of an eye, building thousands of random classification trees to find the most robust subdivisions of the data and the best causal hierarchy.
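    As a small illustration of that idea, the sketch below fits a forest of a thousand classification trees to synthetic data using scikit-learn; the library, data set, and parameters are our own illustrative assumptions, not a description of any specific client analysis.

```python
# Illustrative sketch of the "random forest" idea described above: many
# classification trees are fit to random subsets of the rows and columns, and
# their agreement identifies the most robust subdivisions of the data.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier

# Synthetic stand-in for an aggregated client data set.
X, y = make_classification(n_samples=2_000, n_features=10, n_informative=4,
                           random_state=0)

forest = RandomForestClassifier(n_estimators=1_000,    # a thousand trees
                                max_features="sqrt",   # random feature subset per split
                                bootstrap=True,        # random row subset per tree
                                random_state=0)
forest.fit(X, y)

# Features whose splits most consistently separate the classes across trees.
for rank, i in enumerate(forest.feature_importances_.argsort()[::-1][:3], 1):
    print(f"{rank}. feature {i}: importance {forest.feature_importances_[i]:.3f}")
```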

    You get a full data engineering and analytics team from day one, with advanced tools, experience, and business acumen.

    Finally, we also work with our clients to incorporate their simulation models into workplace decision support solutions.