AI for Business: Workplace Guidelines & Best Practices

A set of practical guidelines to support the safe and responsible use of AI in the workplace.

AI Governance

  • Establish an AI Governance Committee consisting of representatives from different departments such as IT, HR, Legal, Operations, and Senior Leadership. This committee will oversee the ethical use of AI, ensuring it aligns with company values.

  • Define a decision-making process for approving AI initiatives and projects.

  • Set up a system for logging AI activities for audit and control purposes (a minimal logging sketch follows this list).
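
For illustration, a minimal sketch of such an activity log, assuming a Python environment. The log_ai_activity function, the ai_activity_log.jsonl file, and the tool name are hypothetical; a real deployment would write to a secured, centrally managed store rather than a local file.

    import getpass
    import json
    from datetime import datetime, timezone

    LOG_PATH = "ai_activity_log.jsonl"  # illustrative path; use a secured, centrally managed store

    def log_ai_activity(tool: str, purpose: str, data_categories: list[str]) -> None:
        """Append one audit record per AI interaction: who used which tool, when, and why."""
        record = {
            "timestamp": datetime.now(timezone.utc).isoformat(),
            "user": getpass.getuser(),
            "tool": tool,
            "purpose": purpose,
            "data_categories": data_categories,
        }
        with open(LOG_PATH, "a", encoding="utf-8") as f:
            f.write(json.dumps(record) + "\n")

    # Example: record a summarisation request made with an approved assistant
    log_ai_activity("approved-chat-assistant", "summarise support ticket", ["ticket text"])

Keeping the record as one JSON line per event makes it straightforward for the governance committee or auditors to filter by user, tool, or date later.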

Data Management and Privacy

  • Ensure compliance with applicable data protection laws, such as the General Data Protection Regulation (GDPR) and New Zealand’s Privacy Act 2020.

  • Secure consent from data subjects before collecting and processing their data.

  • Incorporate data anonymisation techniques where applicable to maintain privacy (see the anonymisation sketch after this list).

  • Implement strict data access controls and maintain logs of data access and usage.
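
As referenced in the list above, a minimal anonymisation sketch in Python. The anonymise function and the regular expressions are illustrative assumptions only; they are not a complete or reliable PII detector, and a production system should use a vetted PII-detection library.

    import re

    # Illustrative patterns only; real deployments need a vetted PII-detection library.
    EMAIL = re.compile(r"[\w.+-]+@[\w-]+\.[\w.-]+")
    PHONE = re.compile(r"\b(?:\+?64|0)[\s-]?\d{1,2}[\s-]?\d{3}[\s-]?\d{3,4}\b")  # rough NZ-style numbers

    def anonymise(text: str) -> str:
        """Mask obvious personal identifiers before text is stored or sent to an AI tool."""
        text = EMAIL.sub("[EMAIL]", text)
        text = PHONE.sub("[PHONE]", text)
        return text

    print(anonymise("Contact Jane at jane.doe@example.com or 021 555 1234."))
    # -> Contact Jane at [EMAIL] or [PHONE].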

Transparency

  • Be clear about the purpose and use of AI. Document the rationale, decision-making process, and potential benefits and risks of each AI application.

  • Regularly update staff and stakeholders on how AI is being used, the data it's using, and the effects it may have on their roles or business operations.

  • Develop a policy to share information about AI applications, without breaching security or proprietary information.

Bias and Fairness

  • Conduct regular audits of AI systems to identify and mitigate any biases (a minimal audit sketch follows this list).

  • Promote diverse and representative data sets to minimise bias in AI outcomes.

  • Establish a process for employees and stakeholders to raise concerns about AI fairness and decision-making.
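
As mentioned above, a minimal sketch of one possible fairness check: comparing approval rates across groups, where the gap between the highest and lowest rate is sometimes called the demographic parity difference. The selection_rates function and the example data are hypothetical; real audits need larger samples, domain context, and usually specialist tooling.

    from collections import defaultdict

    def selection_rates(decisions):
        """decisions: iterable of (group, approved) pairs -> approval rate per group."""
        totals, approved = defaultdict(int), defaultdict(int)
        for group, ok in decisions:
            totals[group] += 1
            approved[group] += int(ok)
        return {g: approved[g] / totals[g] for g in totals}

    # Hypothetical outcomes from an AI screening tool, grouped by a protected attribute
    rates = selection_rates([("A", True), ("A", True), ("A", False),
                             ("B", True), ("B", False), ("B", False)])
    gap = max(rates.values()) - min(rates.values())
    print(rates, f"demographic parity gap: {gap:.2f}")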

Employee Engagement and Training

  • Implement ongoing training and education programs for employees to acquire the necessary skills to work with AI technologies.

  • Promote an AI-positive culture, encouraging employees to contribute ideas for AI implementation.

  • Communicate with employees about AI’s role in the company, addressing concerns about job displacement and emphasising how AI can augment their work.

Security

  • Establish stringent security protocols around AI data and infrastructure.

  • Regularly conduct risk assessments to identify and rectify potential vulnerabilities.

  • Implement an incident response plan for any potential AI-related security breaches.

AI - Workplace Best Practice

AI Tool Authorisation
  • Guideline: Only use AI tools authorised by the company's IT department. Unauthorised software can pose a security risk.
  • Practices and controls: Implement a system for AI tool requests, vetting, and approval through IT. Keep an updated list of authorised AI tools for reference.

Open and Transparent
  • Guideline: Promote openness and transparency in AI deployments, including clarity in how AI models make decisions and how data is used.
  • Practices and controls: Use AI explainability tools and techniques to understand how AI models work. Maintain clear documentation of AI processes, and communicate regularly about AI use within the company.

Data Security
  • Guideline: Use encrypted connections when handling company or customer data. Never share sensitive data outside secure channels.
  • Practices and controls: Use VPNs and secure file transfer protocols. Conduct regular training on identifying and avoiding phishing attacks. Never upload customer data containing personally identifiable information, and never upload proprietary code or other intellectual property.

Third-party AI Apps
  • Guideline: Only use third-party AI applications approved by IT. Be aware of potential security vulnerabilities in non-vetted apps.
  • Practices and controls: Maintain a list of approved third-party AI apps. Carry out regular reviews and security assessments of these applications.

AI Training Data
  • Guideline: Anonymise any customer data used for AI model training to maintain privacy. Always obtain consent where required.
  • Practices and controls: Use anonymisation tools and techniques. Document consent and have a process to handle requests for data erasure in compliance with GDPR and other regulations (a minimal consent-register sketch appears after this table).

Misuse of Data
  • Guideline: Data manipulation or misrepresentation, especially when using AI tools, is strictly prohibited.
  • Practices and controls: Monitor data usage patterns to detect potential misuse. Implement penalties for violation of this policy.

Regular Audits
  • Guideline: Regularly perform security audits on AI tools to ensure they remain secure and effective.
  • Practices and controls: Conduct quarterly audits of AI tools, using either in-house IT or a third-party firm.

AI Ethics Training
  • Guideline: Provide regular training on the ethical implications and responsibilities of using AI in a business context.
  • Practices and controls: Make AI ethics training mandatory for all employees and revisit it regularly.

Partnerships
  • Guideline: Partner with AI companies or research institutions.
  • Practices and controls: These partnerships can provide expertise, resources, and new perspectives.

Continuous Learning
  • Guideline: Foster a culture of continual learning, development, and change that reflects the fast pace of AI development.
  • Practices and controls: Employ constant feedback loops, through internal and external use of your AI systems and technical AI capability, to nurture a culture of innovation.

Fostering a culture of responsible AI use requires ongoing education and a commitment to ethical behaviour. Collective vigilance will ensure that the tools your organisation uses enhance productivity without compromising your values or the security of your data.
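As flagged in the AI Training Data row above, a minimal sketch of a consent register with an erasure-request path, assuming a Python environment. The ConsentRegister class and its methods are hypothetical; erasure under GDPR typically also requires removing the subject's data from downstream stores and training sets, not just flagging the record.

    from dataclasses import dataclass
    from datetime import datetime, timezone

    @dataclass
    class ConsentRecord:
        subject_id: str
        purpose: str
        granted_at: str
        erased: bool = False

    class ConsentRegister:
        """Minimal in-memory register; a real system needs a durable, access-controlled store."""

        def __init__(self) -> None:
            self._records: dict[str, ConsentRecord] = {}

        def record_consent(self, subject_id: str, purpose: str) -> None:
            self._records[subject_id] = ConsentRecord(
                subject_id, purpose, datetime.now(timezone.utc).isoformat())

        def has_consent(self, subject_id: str) -> bool:
            rec = self._records.get(subject_id)
            return rec is not None and not rec.erased

        def handle_erasure_request(self, subject_id: str) -> None:
            # Mark the record erased; downstream training jobs must exclude erased subjects.
            if subject_id in self._records:
                self._records[subject_id].erased = True

    register = ConsentRegister()
    register.record_consent("cust-001", "AI model training")
    register.handle_erasure_request("cust-001")
    print(register.has_consent("cust-001"))  # False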


Responsible AI Framework

Auror's Responsible Tech & AI framework and supporting documents help advance the safe and responsible use of AI, and the company has made the framework freely available to all.

Do you need a Chief AI Officer?

An AI-first approach means that a company's strategies, culture, and operations are centred on using AI across all business areas. The Chief AI Officer (CAIO) is integral to directing a company toward becoming AI-first. Here's how:

Strategic Leadership

The CAIO develops an AI roadmap and strategy that weaves AI into the core fabric of the business, ensuring that AI isn't just an added feature but is central to problem-solving and innovation.

Advocacy

The CAIO champions AI internally and externally, advocating for its use to stakeholders, partners, and clients. They drive adoption by highlighting success stories and illustrating the transformative potential of AI.

Cultural Transformation

The CAIO works to foster an AI-positive culture within the organisation. This involves training and educating staff about AI, promoting its benefits, and encouraging a mindset of continuously looking for opportunities to implement AI solutions.

Ethics and Compliance Oversight

The CAIO ensures that AI is used responsibly and ethically, helping build trust in AI systems within the company and with its customers.

Cross-Functional Collaboration

To become truly AI-first, a company must integrate AI across all business functions. The CAIO collaborates with different departments to identify opportunities for AI integration and to improve existing processes.

Keeping Pace with AI Development

The AI landscape is continually evolving. A CAIO must stay updated on the latest developments, helping the company adapt and integrate emerging AI technologies effectively.