Measuring performance improvement in software engineering methods for artificial intelligence and analytics projects after cloud app adoption
ERP adoption is well known to require improvements in engineering practices. Cloud-era platforms, offering IaaS, SaaS, and PaaS along with a mix of proprietary and open-source code, are changing the efficiency, speed, scale, and innovativeness with which enterprise systems are developed and implemented. Instead of lengthy, high-risk implementations, IT personnel find new opportunities to create valuable apps that enable faster end-user change.
One key issue remains elusive: accurately measuring the impact on IT team performance after moving to cloud apps. A model is proposed to explain performance improvement and to help uncover configurations contingent on industry and organizational contexts.
Using an empirical software engineering perspective, we interview and survey cloud app vendors, platform consultants, and adopters of leading cloud ERPs, such as Dynamics 365. We analyze the factors that enable or hinder successful innovation in development teams. Cloud offerings are compared with traditional ones to measure improvement in innovative outcomes.
Software engineering practices are identified along a spectrum from structured to agile methods. Team and deliverable performance are measured and correlated with various potential factors impacting innovation. Their configurations are categorized and serve to inform advice on cloud app development and adoption strategies.
The data analysis will also rely on ontology-driven automated text analytics to ensure coherence in inter-case comparison. We propose to use three advanced text mining and semantic annotation software packages: Stanford Protégé to develop our basic ontology; DKPro INCEpTION to perform manual ontology annotations; and ARDAKE to build rules-driven automated annotations.
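To illustrate the rules-driven annotation step, the sketch below tags interview excerpts with concepts from a toy ontology using simple trigger-phrase rules. This is a minimal illustration only: the concept names and regex rules are hypothetical and do not reflect the actual ontology developed in Protégé or the rule syntax used by ARDAKE.

```python
import re

# Hypothetical ontology concepts mapped to trigger-phrase rules.
# In the actual study these would come from the Protégé ontology.
ONTOLOGY_RULES = {
    "AgileMethod": r"\b(scrum|sprint|agile|kanban)\b",
    "StructuredMethod": r"\b(waterfall|gantt|stage gate)\b",
    "CloudPlatform": r"\b(saas|paas|iaas|cloud)\b",
}

def annotate(text):
    """Return (concept, matched_phrase) pairs found in an interview excerpt."""
    found = []
    lowered = text.lower()
    for concept, pattern in ONTOLOGY_RULES.items():
        for match in re.finditer(pattern, lowered):
            found.append((concept, match.group()))
    return found

excerpt = ("After moving to a SaaS ERP, the team replaced "
           "waterfall planning with a two-week sprint cadence.")
annotations = annotate(excerpt)
```

Annotations produced this way can then be compared across cases on a common conceptual vocabulary, which is what makes the inter-case comparison coherent.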