AI Ethics Policy

Effective December 8, 2023

With the rapid growth of AI in the industry, we’re starting to see a wave of new AI code-generation and code-scanning tools in code hosting services, IDEs, and code review products. For companies that have been working hard to address privacy and security issues, this brings a number of new concerns.

While many of these AI tools show incredible promise and value, the downsides may not always be clear:

  • Some tools vacuum up all user-provided input and data for further training of their models.
  • Some tools may give results that unintentionally conflict with another party’s intellectual property, and there’s virtually no settled law on how AI relates to copyright or other IP protection.
  • It’s only a matter of time before bad actors start to inject nefarious training data in order to do things like create backdoors in software written by AI models.
  • There have already been numerous high-profile cases of sensitive data being leaked by well-intentioned users.

At Beanbag, we focus on human-driven software development tools, such as Review Board, and do not currently use AI in our products. However, as we investigate potential options for integrating AI in the future, we believe it’s important to do so with great consideration and responsibility, as our choices can impact the safety, security, and comfort of our customers.

To help guide us, and to help our customers know where we stand, we’ve written this AI Ethics Policy outlining our commitments to ethical AI use, emphasizing transparency, privacy, security, and a People-First approach.

1. Future Integration of AI

  • While AI is not a part of our current offerings, any future AI integration will be approached with caution and responsibility.
  • We commit to exploring AI solutions that align with our core values and the needs of our users.
  • Any AI features would be opt-in, complementing how people use the tool while also keeping human review and approval front and center.
  • We’re proud of our approach to supporting Review Board, and have no plans to use AI to provide customer support for our products.

2. Transparency and Informed Consent

  • In any future AI implementation, we will ensure transparency about the use and capabilities of AI features.
  • Users will be fully informed of how their data is used with any AI tools and will have full control over their engagement with these tools.
  • Just as we require administrator and user consent to send any personal information to a third-party service, we will require the same level of consent to send any intellectual property or user information to an AI tool.
  • We will never use dark patterns to trick users into granting consent.

3. Commitment to Privacy and Data Security

  • Protecting user data will remain a top priority in any future AI development.
  • We will strive to implement AI in a way that respects user privacy and data security, adhering to the highest standards.
  • Any AI capabilities will be opt-in, and administrators will have full control over what AI services and capabilities will be available on their server.

4. A People-First Approach

  • Any AI technology we consider will be designed to augment, not replace, a person’s expertise and decision-making.
  • We believe in the power, judgment, and creativity of people in the creation and review of code, documents, and other works. We’ll continue to prioritize that in how we build and support our software.

We see a benefit in using AI and other automated tooling to help people do their jobs better, but that benefit does not outweigh safety and privacy concerns.

Any usage of AI in our products will be introduced thoughtfully and with care, aligned with our commitment to ethical practices. Our approach will continue to be guided by a commitment to the well-being and success of our users.

Beanbag and Review Board will always be People-First.