As AI becomes standard, watch for these 4 DevSecOps trends

Image of a magnifying glass above balls to represent identifying bias in AI.

Image Credits: Hiroshi Watanabe / Getty Images

David DeSanto

Contributor

David DeSanto is the chief product officer at GitLab Inc., where he leads GitLab’s product division to define and execute GitLab’s product vision and roadmap. David is responsible for ensuring the company builds, ships, and supports the platform that reinforces GitLab’s leadership in the DevSecOps platform market.

AI’s role in software development is reaching a pivotal moment — one that will compel organizations and their DevSecOps leaders to be more proactive in advocating for effective and responsible AI utilization.

Simultaneously, developers and the wider DevSecOps community must prepare to address four global trends in AI: the increased use of AI in code testing, ongoing threats to IP ownership and privacy, a rise in AI bias, and — despite all of these challenges — an increased reliance on AI technologies. Successfully aligning with these trends will position organizations and DevSecOps teams for success. Ignoring them could stifle innovation or, worse, derail your business strategy.

From luxury to standard: Organizations will embrace AI across the board

Integrating AI into products and services will become standard, not a luxury, across all industries, with organizations using DevSecOps to build AI functionality alongside the software that will leverage it. Harnessing AI to drive innovation and deliver enhanced customer value will be critical to staying competitive in the AI-driven marketplace.

Based on my conversations with GitLab customers and my reading of industry trends, I expect that as organizations push the boundaries of efficiency through AI adoption, more than two-thirds of businesses will embed AI capabilities within their offerings by the end of 2024. Organizations are evolving from experimenting with AI to becoming AI-centric.

To prepare, organizations must invest in revising software development governance and emphasizing continuous learning and adaptation in AI technologies. This will require a cultural and strategic shift. It demands rethinking business processes, product development, and customer engagement strategies. And it requires training — which DevSecOps teams say they want and need. In our latest Global DevSecOps Report, 81% of respondents said they would like more training on how to use AI effectively.

As AI becomes more sophisticated and integral to business operations, companies will need to navigate the ethical implications and societal impacts of their AI-driven solutions, ensuring that they contribute positively to their customers and communities.

AI will dominate code-testing workflows

The evolution of AI in DevSecOps is already transforming code testing, and the trend is expected to accelerate. GitLab’s research found that only 41% of DevSecOps teams currently use AI for automated test generation as part of software development, but that number is expected to reach 80% by the end of 2024 and approach 100% within two years.

As organizations integrate AI tools into their workflows, they are grappling with the challenges of aligning their current processes with the efficiency and scalability gains that AI can provide. This shift promises a radical increase in productivity and accuracy — but it also demands significant adjustments to traditional testing roles and practices. Adapting to AI-powered workflows requires training DevSecOps teams in AI oversight and in fine-tuning AI systems so that their integration into code testing enhances software products’ overall quality and reliability.
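
To make this concrete, below is a minimal, hypothetical sketch of how AI-generated tests might be staged for human review in a pipeline. This is not GitLab’s implementation: the `generate_tests` stub stands in for whatever AI service a team has approved, and the prompt, file naming, and review step are all assumptions.

```python
# Hypothetical sketch of AI-assisted test generation with a human review step.
# The model call is stubbed; in practice it would hit whatever AI service your
# team has approved. Drafts go to a separate file and are never auto-merged.
from pathlib import Path

PROMPT_TEMPLATE = (
    "Write pytest unit tests for the following Python module. "
    "Cover edge cases and error paths:\n\n{source}"
)

def generate_tests(source: str) -> str:
    """Stand-in for a real model call; returns a canned test so the sketch runs."""
    prompt = PROMPT_TEMPLATE.format(source=source)
    assert prompt  # a real implementation would send this prompt to the model
    return "def test_placeholder():\n    assert True  # replace with reviewed AI output\n"

def propose_tests(module_path: str) -> Path:
    """Draft tests for a module and stage them in a sibling file for review."""
    source = Path(module_path).read_text()
    draft = generate_tests(source)
    out = Path(module_path).with_name(f"test_{Path(module_path).name}")
    out.write_text(draft)  # staged for human review, not auto-merged
    return out

if __name__ == "__main__":
    Path("calc.py").write_text("def add(a, b):\n    return a + b\n")
    print(f"Draft tests written to {propose_tests('calc.py')}")
```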

Additionally, this trend will redefine the role of quality assurance professionals, requiring them to evolve their skills to oversee and enhance AI-based testing systems. It’s impossible to overstate the importance of human oversight, as AI systems will require continuous monitoring and guidance to be highly effective.

AI’s threat to IP and privacy in software security will accelerate

The growing adoption of AI-powered code creation increases the risk of AI-introduced vulnerabilities and the chance of widespread IP leakage and data privacy breaches affecting software security, corporate confidentiality, and customer data protection.

To mitigate those risks, businesses must prioritize robust IP and privacy protections in their AI adoption strategies and ensure that AI is implemented with full transparency about how it’s being used. Implementing stringent data governance policies and employing advanced detection systems will be crucial to identifying and addressing AI-related risks. Fostering heightened awareness of these issues through employee training and encouraging a proactive risk management culture is vital to safeguarding IP and data privacy.
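
As one illustration of the “detection systems” mentioned above, here is a minimal sketch of a pre-merge leak check that scans code for obvious credential patterns before it leaves the organization, for example before it is sent to an external AI service. The regexes and the sample are illustrative assumptions; production scanners add entropy analysis, allowlists, and far richer rule sets.

```python
# Minimal sketch of a leak check: scan text for obvious credential patterns
# before it leaves the organization. The patterns below are illustrative
# assumptions, not a complete rule set.
import re
import sys

PATTERNS = {
    "AWS access key": re.compile(r"\bAKIA[0-9A-Z]{16}\b"),
    "private key header": re.compile(r"-----BEGIN (?:RSA |EC )?PRIVATE KEY-----"),
    "generic API key": re.compile(
        r"(?i)(?:api[_-]?key|secret)['\"]?\s*[:=]\s*['\"][^'\"]{12,}['\"]"
    ),
}

def scan(text: str) -> list[str]:
    """Return the names of any suspicious patterns found in the text."""
    return [name for name, pattern in PATTERNS.items() if pattern.search(text)]

if __name__ == "__main__":
    sample = 'config = {"api_key": "sk-test-1234567890abcdef"}'
    findings = scan(sample)
    if findings:
        print("blocked: possible secrets detected (" + ", ".join(findings) + ")")
        sys.exit(1)
    print("no obvious secrets found")
```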

The security challenges of AI also underscore the ongoing need to implement DevSecOps practices throughout the software development life cycle, where security and privacy are not afterthoughts but are integral parts of the development process from the outset. In short, businesses must keep security at the forefront when adopting AI — similar to the shift-left concept within DevSecOps — to ensure that innovations leveraging AI do not come at the cost of security and privacy.
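
In practice, shifting left means wiring security scanning into the pipeline itself rather than bolting it on before release. The sketch below shows a GitLab CI configuration that pulls in the platform’s bundled SAST template (the `Security/SAST.gitlab-ci.yml` include is standard GitLab functionality) so every commit, AI-assisted or not, is scanned; the unit-test job is a placeholder for a project’s own suite.

```yaml
# .gitlab-ci.yml -- a sketch of shift-left security scanning.
# The SAST template include is standard GitLab functionality; the
# unit-test job is a placeholder for a project's own test suite.
include:
  - template: Security/SAST.gitlab-ci.yml

stages:
  - test

unit-tests:
  stage: test
  image: python:3.12
  script:
    - pip install pytest
    - pytest
```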

Brace for a rise in AI bias before we see better days

While 2023 was AI’s breakout year, its rise put a spotlight on bias in algorithms. AI tools that rely on internet data for training inherit the full range of biases expressed across online content. This development poses a dual challenge: exacerbating existing biases and creating new ones that impact the fairness and impartiality of AI in DevSecOps.

To counteract pervasive bias, developers must focus on diversifying their training datasets, incorporating fairness metrics, and deploying bias-detection tools in AI models, as well as exploring AI models designed for specific use cases. One promising avenue is using AI feedback to evaluate AI models against a clear set of principles, or a “constitution,” that establishes firm guidelines about what AI will and won’t do. Establishing ethical guidelines and training interventions is crucial to ensuring unbiased AI outputs.
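
As a concrete example of the fairness metrics mentioned above, the sketch below computes demographic parity difference, one of the simplest such measures: the gap in positive-prediction rates between groups. The data and the alert threshold are illustrative assumptions.

```python
# Minimal sketch of one fairness metric: demographic parity difference,
# the gap between groups' positive-prediction rates. The data and the
# 0.1 alert threshold are illustrative assumptions, not recommendations.
from collections import defaultdict

def demographic_parity_gap(predictions: list[int], groups: list[str]) -> float:
    """Max difference in positive-prediction rate across groups (0 = parity)."""
    totals: dict[str, int] = defaultdict(int)
    positives: dict[str, int] = defaultdict(int)
    for pred, group in zip(predictions, groups):
        totals[group] += 1
        positives[group] += pred
    rates = [positives[g] / totals[g] for g in totals]
    return max(rates) - min(rates)

if __name__ == "__main__":
    preds = [1, 1, 0, 1, 0, 0, 1, 0]  # model's binary decisions
    groups = ["a", "a", "a", "a", "b", "b", "b", "b"]
    gap = demographic_parity_gap(preds, groups)
    print(f"demographic parity gap: {gap:.2f}")
    if gap > 0.1:  # illustrative threshold
        print("warning: investigate potential bias before deploying")
```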

Organizations must establish robust data governance frameworks to ensure the quality and reliability of the data in their AI systems. AI systems are only as good as the data they process, and bad data can lead to inaccurate outputs and poor decisions.
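
As a small illustration of what such governance checks can look like in code, the sketch below runs basic quality gates over a labeled dataset before it reaches a model: duplicate rows, missing labels, and class imbalance. The thresholds and data shape are assumptions a team would tune to its own pipeline.

```python
# Sketch of basic data-quality gates for a training set: duplicate rows,
# missing labels, and class imbalance. Thresholds are illustrative assumptions.
from collections import Counter

def quality_report(rows: list[tuple[str, str | None]]) -> list[str]:
    """Each row is (text, label). Returns a list of human-readable issues."""
    issues = []
    texts = [text for text, _ in rows]
    duplicates = len(texts) - len(set(texts))
    if duplicates:
        issues.append(f"{duplicates} duplicate row(s)")
    missing = sum(1 for _, label in rows if label is None)
    if missing:
        issues.append(f"{missing} row(s) with missing labels")
    counts = Counter(label for _, label in rows if label is not None)
    if counts:
        majority_share = max(counts.values()) / sum(counts.values())
        if majority_share > 0.9:  # illustrative imbalance threshold
            issues.append(f"majority class covers {majority_share:.0%} of labels")
    return issues

if __name__ == "__main__":
    data = [("good code", "pass"), ("good code", "pass"), ("bad code", None)]
    for issue in quality_report(data) or ["no issues found"]:
        print(issue)
```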

Developers and the broader tech community should demand and facilitate the development of unbiased AI through techniques such as constitutional AI or reinforcement learning from human feedback (RLHF) aimed at reducing bias. This requires a concerted effort across AI providers and users to ensure responsible AI development that prioritizes fairness and transparency.

Preparing for the AI revolution in DevSecOps

As organizations ramp up their shift toward AI-centric business models, it’s not just about staying competitive — it’s also about survival. Business leaders and DevSecOps teams will need to confront the challenges that AI amplifies — whether threats to privacy, questions of trust in what AI produces, or cultural resistance.

Collectively, these developments represent a new era in software development and security. Navigating these changes requires a comprehensive approach encompassing ethical AI development and use, vigilant security and governance measures, and a commitment to preserving privacy. The actions organizations and DevSecOps teams take now will set the course for the long-term future of AI in DevSecOps, ensuring its ethical, secure, and beneficial deployment.

