How will AI disrupt a company’s target market?
I believe nearly every company's market is entering a phase of disruption. Existing and emerging AI plays a part in this in several ways:
Altering marketplace dynamics by empowering new entrants (lower barrier to entry).
Enabling more personalized offerings and experiences.
Streamlining operations and reducing costs through automation.
Creating opportunities for new entrants to capture market share, since the rapid pace of new, innovative technology typically outstrips larger incumbents' ability to adopt it.
Think Netflix overtaking Blockbuster, iTunes overtaking Tower Records, Apple overtaking BlackBerry… the list goes on and on.
GPT allows you to link prompts; how does this change the way it will be used?
I believe this question relates to the ability of ChatGPT and similar LLM tools to hold the context of the entire chat, as opposed to simply the most recent prompt as traditional “chatbots” have done. This is truly a game-changer! It moves from a simple pre-programmed Q&A to an actual conversation with the technology, which can recall earlier information in the conversation, understand context, and make assumptions (predictions) about new prompts based on the content of earlier ones. In short, this becomes a true conversation and not the frustrating “sorry, I didn’t understand your question” experience most have come to know previous chatbots for.
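To make that concrete, here is a minimal sketch (using the openai Python SDK; the model name and prompts are hypothetical) of how this works in practice: the application re-sends the running conversation history with each request, which is what lets the model "remember" earlier turns.

```python
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# The running history is what gives the model its "memory."
history = [
    {"role": "user", "content": "My flight to Denver leaves at 6 AM tomorrow."},
    {"role": "assistant", "content": "Got it, an early departure to Denver."},
    # A later prompt can rely on earlier turns without restating them:
    {"role": "user", "content": "What time should I leave for the airport?"},
]

response = client.chat.completions.create(
    model="gpt-4o-mini",  # model name is an assumption; substitute your own
    messages=history,     # the full history, not just the latest prompt
)
print(response.choices[0].message.content)
```

The model can answer the last question only because the earlier turns travel with it; drop the history and you are back to the single-shot chatbot experience described above.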
Given the information sensitivity and pending Federal guidelines around AI, how are you evaluating tools for more long-term initiatives? Is there an industry standard that is forming on healthcare-specific implementations?
This is still a very evolving landscape, specifically with Generative AI. The reality is the technology is so new and so rapidly changing that an answer given last year is no longer the same answer, and today's answer won't be the same in six months. What I'm seeing in application is companies addressing this through a combination of:
AI Governance: Standing up committees to strategize, evaluate, and manage AI through considerations such as data sensitivity, ethical considerations, bias, interoperability, supportability, scalability, and accuracy thresholds.
Information Security: It seems many companies, especially larger ones, are looking to delegate much of this to established, trusted vendors with the appropriate security certifications by using their secure platform environments as opposed to publicly available tools. In the world of generative AI, these are largely the major cloud vendors such as Microsoft Azure, Amazon Web Services, Google, and Oracle. You also have emerging services, such as OpenAI's direct Enterprise offering.
Certifications and security standards major vendors typically offer include:
HITRUST: Ensures compliance with healthcare security standards.
HIPAA: Essential for handling protected health information.
ISO 27001: Demonstrates comprehensive information security management.
SOC 2: Assures security, availability, processing integrity, confidentiality, and privacy controls.
PCI DSS: Required for securing payment card transactions.
FERPA: Relevant for protecting student education records.
NIST: Adherence to authoritative cybersecurity guidelines.
Is anyone leveraging Azure Data Lake, and if so, how is data security applied over the LLM?
Yes, Moser Consulting has implementations on Azure Data Lake Storage (ADLS) and the Azure OpenAI Service. With this technology combination, we leverage RBAC and ACLs for managing access to both the LLM and ADLS. Additionally, Azure provides built-in features which allow us to ensure the security of custom solutions; these include the ability to encrypt data at rest and in transit, the ability to utilize customer-managed keys for this encryption, and MFA through Azure AD.
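As a rough illustration of that pattern (not our production configuration; the account, container, directory, and group names below are hypothetical), this sketch uses the azure-identity and azure-storage-file-datalake Python libraries to authenticate with an Azure AD identity and layer a directory-level ACL on top of RBAC role assignments:

```python
from azure.identity import DefaultAzureCredential
from azure.storage.filedatalake import DataLakeServiceClient

# Azure AD (Entra ID) identity; supports MFA and keeps keys out of code.
credential = DefaultAzureCredential()

service = DataLakeServiceClient(
    account_url="https://exampleaccount.dfs.core.windows.net",  # encrypted in transit (TLS)
    credential=credential,
)

fs = service.get_file_system_client("curated")       # hypothetical container
directory = fs.get_directory_client("claims-data")   # hypothetical directory

# Grant a specific Azure AD group read/execute on this directory via an ACL,
# layered on top of the coarser RBAC role assignments managed at the resource level.
directory.set_access_control(
    acl="group:00000000-0000-0000-0000-000000000000:r-x"
)
```

In this pattern, RBAC governs coarse-grained access to the storage account and the Azure OpenAI resource, ACLs refine access within the lake itself, and encryption at rest (optionally with customer-managed keys) is configured on the account rather than in application code.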
Moser Consulting offers a fully-managed data and analytics platform, Honeycomb, leveraging this technology stack and empowering organizations to have a modernized, scalable platform to support D&A initiatives, BI development & reporting, as well as advanced analytics such as AI & ML, without having to build their own infrastructure.
Additionally, we partner with organizations to build their own data ecosystem, set their AI strategy, and plan roadmaps for the adoption of AI and automation. If you’d like more information about our services or insights into our security measures, please send a message and we’ll gladly engage in a conversation!
The following results are from ChatGPT:
The OpenAI connector in Azure applies data security measures through various mechanisms to safeguard the confidentiality, integrity, and availability of data. Some of the key security practices include:
Encryption: Data transmitted between Azure services and the OpenAI connector is encrypted to prevent unauthorized interception or tampering. Transport Layer Security (TLS) encryption is commonly used to secure communications channels.
Access Control: Role-based access control (RBAC) and Azure Active Directory (AAD) integration are utilized to manage access to the OpenAI connector resources. This ensures that only authorized users or applications can interact with the service and access sensitive data.
Network Security: Azure employs network security groups (NSGs), firewalls, and virtual private networks (VPNs) to restrict inbound and outbound traffic to the OpenAI connector. Network-level protections help mitigate the risk of unauthorized access and cyber attacks.
Data Isolation: Azure implements measures to isolate customer data within the OpenAI connector environment, preventing unauthorized access or leakage between tenants. Logical and physical separation mechanisms are employed to ensure data confidentiality and privacy.
Data Governance: Azure provides tools and services for data governance, including data classification, labeling, and monitoring capabilities. These features help organizations track and protect sensitive data assets within the OpenAI connector environment.
Compliance Certifications: Azure maintains compliance with various industry standards and regulations, such as SOC 2, ISO 27001, HIPAA, and GDPR. Compliance certifications demonstrate Azure's commitment to data security and regulatory compliance when using the OpenAI connector.
Security Monitoring and Logging: Azure offers robust logging and monitoring capabilities to detect and respond to security incidents in real time. Security logs, audit trails, and anomaly detection mechanisms help identify potential threats and vulnerabilities within the OpenAI connector infrastructure.
By implementing these data security measures, the OpenAI connector in Azure aims to provide a secure and trusted environment for organizations to leverage AI capabilities while protecting their sensitive data assets.
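To give a feel for what a couple of those controls look like from an application's perspective, here is a minimal sketch (the endpoint, deployment name, and API version are assumptions) of calling an Azure OpenAI deployment using Azure AD / RBAC-based authentication rather than a shared API key, over a TLS-encrypted endpoint:

```python
from azure.identity import DefaultAzureCredential, get_bearer_token_provider
from openai import AzureOpenAI

# Azure AD / RBAC-based authentication instead of a shared API key.
token_provider = get_bearer_token_provider(
    DefaultAzureCredential(),
    "https://cognitiveservices.azure.com/.default",
)

client = AzureOpenAI(
    azure_endpoint="https://example-openai.openai.azure.com",  # HTTPS/TLS endpoint
    azure_ad_token_provider=token_provider,
    api_version="2024-02-01",  # assumed API version
)

response = client.chat.completions.create(
    model="example-gpt-deployment",  # your deployment name, not the public model name
    messages=[{"role": "user", "content": "Summarize our data-retention policy."}],
)
print(response.choices[0].message.content)
```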
As AI continues to advance, will coding/creating scripts start to become obsolete for end users to create? In what scenarios could it still apply, if at all?
Similar to our conversations about nearly all business processes, I believe it will evolve, but not become obsolete. What we'll start to see with many roles is that the "easy" tasks, decisions, and content creation will be quickly automated through the use of these tools, while humans will still be needed to review, correct, and complete the more complex items. I think programming is already seeing this with this technology. Just like any other role, there is a need to up-skill to stay competitive in the market.
How is AI getting into real medical-field automation? For example, AI that can look for lung cancer and help doctors get the assessment faster?
There are so many amazing examples of health systems advancing in this space. Here are a few examples:
Mayo Clinic is using Generative AI with Google Cloud to create a chatbot that receives questions, pulls information from internal web pages and documents, and summarizes information from EHRs to form answers.
“For instance, a clinician could ask the AI chatbot if the patient is a smoker, and Google's tool could find text in a patient's record that reads ‘patient consumed tobacco five years ago,’” according to Dr. Anantraman.
UPMC is using predictive analytics AI to identify pre-surgical risk as well as post-discharge success predictors to prevent readmissions.
Mayo Clinic introduced AI which can detect pancreatic cancer around 475 days before traditional clinical diagnosis.
UT Health is building a generative AI model which can help physicians process large amounts of information to predict heart attacks.
Concern about using AI and keeping your data or conversations private. I know there are “privacy promises,” but your data and conversations are out there.
This is absolutely true with the publicly accessible models. However, there are ways to ensure data privacy and security, as mentioned in the responses above. Say, for example, you're using Microsoft Azure's platform to access OpenAI's LLM securely. This is protected by the same security standards and certifications as any other private data your business trusts Microsoft with.
Microsoft has partnered with OpenAI to provide the Azure OpenAI Service. This service empowers you to have your own instance(s) of GPT models deployed within your tenant, mitigating concerns regarding the exposure of confidential data in a public model. Regarding the conversation itself being private, Azure does review the prompts submitted to your model to ensure there are no bad actors utilizing Azure products. Additionally, there is a form available for requesting that your conversations remain private.
As AI continues to advance, will coding/creating scripts start to become obsolete for end users to create? In what scenarios could it still apply, if at all?
While AI may change the landscape of coding and scripting, it's unlikely to completely replace humans who code. Instead, it will augment and enhance the capabilities of developers. Some examples of how the role of developers/coders could be enhanced include:
Automated Code Generation: AI can assist in generating code snippets, templates, or even entire functions based on input requirements or specifications. This can help developers speed up the coding process, especially for repetitive tasks or boilerplate code (see the sketch after this list).
Code Optimization: AI algorithms can analyze existing codebases to identify inefficiencies, redundancies, or opportunities for optimization. This can lead to faster and more efficient code execution, improved performance, and reduced resource usage.
Error Reduction and Bug Detection: AI-powered tools can detect and even automatically fix common bugs and errors in code. By continuously analyzing code changes and execution patterns, these tools can help developers identify and resolve issues more quickly, leading to more robust and reliable code.
Code Reviews: AI can assist in automating code reviews by analyzing coding standards, best practices, and potential issues. This can help developers ensure code quality and consistency across projects, while also freeing up time for more critical tasks.
Documentation: AI-powered NLP models can automatically generate documentation from code comments, function names, and other contextual information. This can help engineers maintain up-to-date documentation with less manual effort, improving code comprehension and collaboration.
Predictive Maintenance: AI can predict potential software failures or performance issues by analyzing historical data, system logs, and usage patterns. This allows developers to proactively address issues before they impact end-users, reducing downtime and improving overall software reliability.
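As a small, hypothetical example of the code-generation point above (the prompt and model name are illustrative, not a specific product recommendation), a developer might ask an LLM for a boilerplate function and then review, test, and correct the output before adopting it:

```python
from openai import OpenAI

client = OpenAI()

prompt = (
    "Write a Python function parse_iso_date(value: str) that returns a "
    "datetime.date and raises ValueError with a clear message on bad input."
)

response = client.chat.completions.create(
    model="gpt-4o-mini",  # assumed model; any capable code model works here
    messages=[
        {"role": "system", "content": "You are a careful Python code generator."},
        {"role": "user", "content": prompt},
    ],
)

generated_code = response.choices[0].message.content
print(generated_code)  # the developer still reviews, tests, and corrects this output
```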
Overall, AI has the potential to revolutionize the software development process by augmenting the capabilities of engineers, reducing manual effort, and improving overall productivity and code quality. However, the importance of humans will persist; their role remains essential for evaluating output, refining prompts for generation, and making critical decisions throughout the process. AI will augment the role of engineers, allowing them to generate code more efficiently and more accurately, but their skills are still highly valued and required. There will always be a need for human developers to design, train, and maintain these AI systems. As AI continues to advance, it's likely that there will be shifts in how coding and scripting are approached, but they will definitely not become completely obsolete.
In what other ways will AI impact code generation?
Over time, as engineers become more skilled at leveraging AI for code generation, organizations may need fewer engineers to produce the same quantity of code due to the increased efficiency of engineers empowered by AI. However, humans will remain essential in ensuring that AI is used in a responsible, transparent, and human-centric way.
Low-code and no-code platforms: These platforms are already gaining popularity, allowing users to build applications with minimal coding knowledge. As AI advances, these platforms may become even more sophisticated, enabling users to create complex applications without writing extensive code – making code generation more accessible and efficient for a broader range of users.
Skill improvement: AI can also help end users and programmers learn and improve their coding and script-creation skills by providing feedback, guidance, and tutorials.
Responses by:
Adrienne Watts, MBA, Vice President | Executive Consultant | Data & Analytics Moser Consulting, Inc.
D.J. Plavsic, Executive Director – Time of Service | Revenue Cycle: System Patient Access (SPA) | Indiana University Health
Learn more from our experts about AI and other topics
ASCII Anything is a weekly deep dive into all things tech. Join us every Wednesday to find out who from Moser Consulting's more than 200 resident experts we'll be talking to & what they're focused on at the moment.