Gogoro delays India plans due to policy uncertainty, launches bike-taxi pilot with Rapido

Gogoro India launch in New Delhi in December 2023

Image Credits: Jagmeet Singh / TechCrunch

Taiwanese electric two-wheeler maker Gogoro has deferred its highly ambitious plans for India, as New Delhi has not yet launched an anticipated scheme for battery swapping, a company executive said. In the meantime, the company has started a bike-taxi pilot with aggregator Rapido to test its vehicles before their commercial release.

Gogoro is “forced to wait for the finalization of incentive schemes” from the Indian government before ramping up its vehicle sales and battery pack production in the country, co-founder and CEO Horace Luke said during the company’s Q2 earnings call on Thursday.

“We had forecasted revenue from India for 2024, but due to the delay in implementation in subsidies to include battery swapping vehicles, most of it is now projected for 2025,” the executive told investors.

Luke also underlined that the company is working with India’s heavy industries ministry to ensure that the expected next iteration of the government’s Faster Adoption and Manufacturing of (Hybrid &) Electric Vehicles scheme, or FAME 3, will extend to battery-swapping vehicles and infrastructure the same benefits that earlier versions gave to plug-in charging electric vehicles.

In 2019, the Indian government released FAME 2 with a budgetary allocation of $1.19 billion (10,000 crore Indian rupees) to provide subsidies to EV buyers in the country. The outlay was expanded to over $1.3 billion in February this year, though the scheme ran only until March 31.

Despite delaying its original plans, Gogoro remains bullish on India, as its home market of Taiwan has stagnated. In December, the company launched its battery-swapping network and three smart scooters in India to begin its expansion.

“We are still operating at a loss and still investing for growth because we believe the markets that we are targeting, India, Southeast Asia, and other markets, are ripe for electric vehicle disruption,” Luke said on the earnings call.

Gogoro has launched a pilot program with ride-hailing startup Rapido, the executive said, without disclosing further details.

Rapido co-founder and CEO Aravind Sanka confirmed to TechCrunch that the pilot is currently live in New Delhi, with plans to deploy around 1,000 Gogoro vehicles.

Depending on the pilot’s success, the companies will decide on its expansion, Sanka said.

Gogoro started looking at India as its next big market in 2021, tying up with Indian automobile giant Hero MotoCorp. It also announced a $1.5 billion investment in the Indian state of Maharashtra last year and backed EV fleet management startup Zypp Electric’s $25 million round to test operations in the country.

On the earnings call, Luke said Gogoro is “actively collaborating with five Indian local electric two-wheeler OEMs and have commenced vehicle testing for the deployment of these powered by Gogoro network solutions” in the country.

“These collaborations bring to market a variety of products at lower price points, and the initiation of testing these solutions marks an exciting step forward in expanding our presence and providing a wider range of vehicle options to B2B customers in India,” he said.

In an interview with TechCrunch last year, Luke stated the company had invested “tens of millions” of dollars in India and is set to put more money in.

In Q2, Gogoro had a backlog of over 6,500 orders for its Pulse and JEGO vehicles, valued at $12.3 million. However, the company noted in its 2024 guidance that the Taiwanese two-wheeler market is softer and that “strong sales” of the lower-priced JEGO put pressure on its average sales price.

Twitch attire policy update shuts down the viral topless meta

Twitch Coin warp

Image Credits: Bryce Durbin / TechCrunch

Twitch is effectively banning the “topless meta” and other implied nudity streams with another update to its attire policy.

Under the new policy, announced on Wednesday, streamers are no longer permitted to “imply or suggest that they are fully or partially nude,” and may not show a visible outline of their genitals, even if they’re covered. Covering breasts or genitals with objects or censor bars to suggest nudity is also prohibited. Female-presenting streamers may show cleavage, as long as their nipples and underbust are covered, and “it is clear that the streamer is wearing clothing.”

The update is in response to the rise of popular streams known as topless or “black bar” meta, in which streamers appeared naked by using clever framing or black censor bars to cover their breasts and genitals. Although the content didn’t technically violate Twitch’s attire policy forbidding actual nudity, and was properly tagged for “Sexual Themes,” the streams were still controversial in the Twitch community.

https://twitter.com/payowow/status/1735338521022333359

“For many users, the thumbnails of this content can be disruptive to their experience on Twitch,” Twitch’s Chief Customer Trust Officer Angela Hession wrote in a blog post about the update. “While content labeled with the Sexual Themes label isn’t displayed on the home page, this content is displayed within the category browse directories, and we recognize that many users frequent these pages to find content on Twitch.”

The company is also working on a feature that would allow streamers to blur thumbnails for content tagged for Sexual Themes, in addition to user settings that would allow viewers to filter content labeled with mature tags that might include sexual themes, tobacco or alcohol use, violence or explicit language.

Twitch has reworked its content policies regarding nudity and sexual themes multiple times in the past month. In a policy overhaul in December, the platform announced that it would allow “fictionalized” nudity featuring nipples, buttocks and genitals, in response to feedback from its art stream community.

While illustrated, animated or sculpted depictions of nudity were permitted, VTubers and physical streamers themselves still had to abide by the platform’s attire policy, which forbade exposed breasts and other nudity. The update also streamlined the platform’s stance on sexual content by establishing an all-encompassing “Sexual Themes” label, so that streams tagged with mature labels wouldn’t be promoted on the platform’s homepage.

The platform rolled back the artistic nudity policy days later — the streaming community was fine with lewd furry art, but the influx of hyperrealistic AI-generated nude images raised red flags. In a follow-up blog post, Twitch CEO Dan Clancy wrote that the company went “too far” with the change, and that Twitch agreed with “community concern” regarding the flood of AI-generated nude content.

https://twitter.com/SmallAnt/status/1735379602447712559

“Digital depictions of nudity present a unique challenge — AI can be used to create realistic images, and it can be hard to distinguish between digital art and photography,” Clancy said.

The topless meta went viral late last year when streamer and OnlyFans model Morgpie began appearing naked in streams. Her “topless” streams were framed to show her bare shoulders, upper chest and cleavage. The framing implied nudity, but never actually showed content that explicitly violated Twitch’s sexual content policies. She was banned from Twitch after hosting a topless charity stream that raised funds for Doctors Without Borders.

https://twitter.com/mogrpee/status/1734017844545720321

Other streamers began making similar content, using black bars, sheets of paper and deliberately placed objects like game controllers to cover themselves. Male streamers also parodied the meta by streaming in the nude but covering their genitals and nipples. Other creators — particularly male streamers — complained about the popularity of implied nude content. Streamer Gross Gore, who was previously banned from Twitch for violating its off-platform behavior policy after sexual assault and grooming allegations against him came to light, derided topless meta creators in a recent stream as a danger to children.

Other streamers have been critical of the gendered double standard on Twitch; while all “female-presenting breasts with exposed nipples” are forbidden unless breastfeeding, male streamers are allowed to show their full chests. Twitch affiliate Ren_Nyx pointed out the double standard in an X comment replying to Twitch’s policy update announcement, writing that “it makes no sense that men can be shirtless on stream,” but “if women do it and aren’t even visible it’s somehow a problem.”

Others raised concerns that the new policy would only affect smaller streamers.

“We can only hope that you put your money where your mouth is and actually enforce these new rules toward everyone it applies to — not just small streamers and vtubers,” VTuber MissusMummy replied to Twitch’s X post. “The big named money makers need to know they are not exempt from following the rules.”

OpenAI changes policy to allow military applications

pattern of openAI logo

Image Credits: Bryce Durbin / TechCrunch

Update: In an additional statement, OpenAI has confirmed that the language was changed in order to accommodate military customers and projects the company approves of.

Our policy does not allow our tools to be used to harm people, develop weapons, for communications surveillance, or to injure others or destroy property. There are, however, national security use cases that align with our mission. For example, we are already working with DARPA to spur the creation of new cybersecurity tools to secure open source software that critical infrastructure and industry depend on. It was not clear whether these beneficial use cases would have been allowed under “military” in our previous policies. So the goal with our policy update is to provide clarity and the ability to have these discussions.

Original story follows:

In an unannounced update to its usage policy, OpenAI has opened the door to military applications of its technologies. While the policy previously prohibited use of its products for the purposes of “military and warfare,” that language has now disappeared, and OpenAI did not deny that it was now open to military uses.

The Intercept first noticed the change, which appears to have gone live on January 10.

Unannounced changes to policy wording happen fairly frequently in tech as the products they govern evolve and change, and OpenAI is clearly no different. In fact, the company’s recent announcement that its user-customizable GPTs would be rolling out publicly, alongside a vaguely articulated monetization policy, likely necessitated some changes.

But the change to the no-military policy can hardly be a consequence of this particular new product. Nor can it credibly be claimed that dropping “military and warfare” merely makes the policy “clearer” or “more readable,” as a statement from OpenAI regarding the update suggests. It’s a substantive, consequential change of policy, not a restatement of the same policy.

You can read the current usage policy here, and the old one here. Here are screenshots with the relevant portions highlighted:

Before the policy change. Image Credits: OpenAI
After the policy change. Image Credits: OpenAI

Obviously the whole thing has been rewritten, though whether it’s more readable or not is more a matter of taste than anything. I happen to think a bulleted list of clearly disallowed practices is more readable than the more general guidelines they’ve been replaced with. But the policy writers at OpenAI clearly think otherwise, and if this gives them more latitude to interpret favorably or unfavorably a practice hitherto outright disallowed, that is simply a pleasant side effect. “Don’t harm others,” the company said in its statement, is “broad yet easily grasped and relevant in numerous contexts.” More flexible, too.

Though, as OpenAI representative Niko Felix explained, there is still a blanket prohibition on developing and using weapons — you can see that it was originally listed separately from “military and warfare.” After all, the military does more than make weapons, and weapons are made by entities other than the military.

And it is precisely where those categories do not overlap that I would speculate OpenAI is examining new business opportunities. Not everything the Defense Department does is strictly warfare-related; as any academic, engineer or politician knows, the military establishment is deeply involved in all kinds of basic research, investment, small business funds and infrastructure support.

OpenAI’s GPT platforms could be of great use to, say, army engineers looking to summarize decades of documentation of a region’s water infrastructure. It’s a genuine conundrum at many companies how to define and navigate their relationship with government and military money. Google’s “Project Maven” famously took one step too far, though few seemed to be as bothered by the multibillion-dollar JEDI cloud contract. It might be OK for an academic researcher on an Air Force Research Laboratory grant to use GPT-4, but not for a researcher inside the AFRL working on the same project. Where do you draw the line? Even a strict “no military” policy has to stop after a few removes.

That said, the total removal of “military and warfare” from OpenAI’s prohibited uses suggests that the company is, at the very least, open to serving military customers. I asked the company to confirm or deny that this was the case, warning them that the language of the new policy made it clear that anything but a denial would be interpreted as a confirmation.

As of this writing they have not responded. I will update this post if I hear back.

Update: OpenAI offered the same statement given to The Intercept, and did not dispute that it is open to military applications and customers.

Critical 2024 AI policy blueprint: Unlocking potential and safeguarding against workplace risks

Futuristic display formed by glowing particles on a black background.

Image Credits: Yuichiro Chino / Getty Images

Richard Marcus

Contributor

Richard Marcus is the head of information security at AuditBoard.

Many have described 2023 as the year of AI, and the term made several “word of the year” lists. While it has positively impacted productivity and efficiency in the workplace, AI has also presented a number of emerging risks for businesses.

For example, a recent Harris Poll survey commissioned by AuditBoard revealed that roughly half of employed Americans (51%) currently use AI-powered tools for work, undoubtedly driven by ChatGPT and other generative AI solutions. At the same time, however, nearly half (48%) said they enter company data into AI tools not supplied by their business to aid them in their work.

This rapid integration of generative AI tools at work presents ethical, legal, privacy, and practical challenges, creating a need for businesses to implement new and robust policies surrounding generative AI tools. As it stands, most have yet to do so — a recent Gartner survey revealed that more than half of organizations lack an internal policy on generative AI, and the Harris Poll found that just 37% of employed Americans have a formal policy regarding the use of non-company-supplied AI-powered tools.

While it may sound like a daunting task, developing a set of policies and standards now can save organizations from major headaches down the road.

AI use and governance: Risks and challenges

Generative AI’s rapid adoption has made keeping pace with AI risk management and governance difficult for businesses, and there is a distinct disconnect between adoption and formal policies. The previously mentioned Harris Poll found that 64% perceive AI tool usage as safe, indicating that many workers and organizations could be overlooking risks.

These risks and challenges can vary, but three of the most common include:

Overconfidence. The Dunning–Kruger effect is a bias that occurs when people overestimate their own knowledge or abilities. We’ve seen this manifest in AI usage: many overestimate the capabilities of AI without understanding its limitations. This could produce relatively harmless results, such as incomplete or inaccurate output, but it could also lead to much more serious situations, such as output that violates legal usage restrictions or creates intellectual property risk.

Security and privacy. AI needs access to large amounts of data to be fully effective, but this sometimes includes personal data or other sensitive information. There are inherent risks in using unvetted AI tools, so organizations must ensure they’re using tools that meet their data security standards (a minimal sketch of a pre-send check along these lines follows this list).

Data sharing. Just about every technology vendor has launched or will soon launch AI capabilities to augment their core product offerings, and many of these additions are self-service or user-enabled. Free-to-use solutions often operate by monetizing user-provided data, and in these cases there is one thing to remember: if you are not paying for the product, you likely are the product. Organizations should take care to ensure that the models they use are not trained on personal or third-party data without consent, and that their own data is not used to train models without permission.
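To make the security and data-sharing risks concrete, here is a minimal, hypothetical sketch of a pre-send check: a script that flags obviously sensitive strings before text is pasted into an external AI tool. The patterns and the blocking behavior are illustrative assumptions, not a complete DLP solution and not something prescribed here:

```python
import re

# Hypothetical sketch: flag obviously sensitive strings before a prompt
# is sent to an external AI tool. Patterns are illustrative, not exhaustive.
SENSITIVE_PATTERNS = {
    "email": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
    "us_ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "api_key": re.compile(r"\b(?:sk|pk)_[A-Za-z0-9]{16,}\b"),
}

def flag_sensitive(text: str) -> list[str]:
    """Return the names of sensitive patterns found in the text."""
    return [name for name, pattern in SENSITIVE_PATTERNS.items()
            if pattern.search(text)]

if __name__ == "__main__":
    prompt = "Summarize this: contact jane.doe@example.com, SSN 123-45-6789"
    hits = flag_sensitive(prompt)
    if hits:
        # Prints: Blocked: prompt contains email, us_ssn
        print(f"Blocked: prompt contains {', '.join(hits)}")
    else:
        print("OK to send")
```

In practice a check like this would live in a browser extension, proxy, or DLP agent rather than a standalone script, but the principle is the same: inspect data before it leaves the organization.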

There are also risks and challenges associated with developing products that include AI capabilities, such as defining the acceptable use of customer data for model training. As AI infiltrates every facet of business, these and many other considerations are bound to follow.

Developing comprehensive AI usage policies

Integrating AI into business processes and strategies has become imperative, but it requires developing a framework of policies and guidelines for responsible deployment and use. How this looks may vary based on an organization’s specific needs and use cases, but four overarching pillars can help organizations leverage AI for innovation while mitigating risks and upholding ethical standards.

Integrating AI into strategic organizational plans

Embracing AI requires aligning its deployment with the strategic objectives of the business. It’s not about adopting cutting-edge technology for technology’s sake; integrating AI applications that resonate with the organization’s defined mission and objectives should enhance operational efficiencies and drive growth.

Mitigating overconfidence

Acknowledging the potential of AI should not equate to unwavering trust. Cautious optimism (with an emphasis on “cautious”) should always prevail, as organizations need to account for the limitations and potential biases of AI tools. Finding a calculated balance between leveraging AI’s strengths and remaining aware of its current and future constraints is pivotal.

Defining guidelines and best practices in AI tool usage

Defining protocols for data privacy, security measures, and ethical considerations ensures consistent and ethical utilization across all departments. This process includes:

Involving diverse teams in policy creation: Teams including legal, HR, and information security should participate to create a holistic perspective, integrating both legal and ethical dimensions into operational frameworks.

Defining parameters on usage and restricting harmful applications: Articulate policies for AI usage in practical and technology applications, identify areas where AI can be employed beneficially, and prevent potentially harmful applications, while setting up processes to evaluate new AI use cases that may align with the business’s strategic interests (a sketch of machine-checkable usage parameters follows this list).

Performing regular policy updates and employee education: AI evolves continuously, and this evolution may only accelerate — policy frameworks need to adapt in tandem. Regular updates ensure that policies align with the quickly changing AI landscape, and comprehensive employee education ensures compliance and responsible use.
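As referenced in the list above, usage parameters become easier to enforce when they are encoded as data rather than prose. The sketch below is a hypothetical illustration of that idea; the tool names, departments, and data classifications are invented, and a real policy engine would carry far more nuance:

```python
from dataclasses import dataclass

# Hypothetical sketch: an AI tool allowlist encoded as data, so that
# "parameters on usage" can be checked by systems and reviewers.
@dataclass
class ToolPolicy:
    name: str
    approved: bool
    allowed_data: set[str]   # data classes the tool may receive
    departments: set[str]    # teams cleared to use it

POLICIES = {
    "internal-llm": ToolPolicy("internal-llm", True,
                               {"public", "internal"}, {"eng", "legal", "hr"}),
    "consumer-chatbot": ToolPolicy("consumer-chatbot", True,
                                   {"public"}, {"eng"}),
}

def is_use_allowed(tool: str, data_class: str, department: str) -> bool:
    policy = POLICIES.get(tool)
    if policy is None or not policy.approved:
        return False  # unknown or unapproved tools are denied by default
    return data_class in policy.allowed_data and department in policy.departments

print(is_use_allowed("consumer-chatbot", "internal", "eng"))  # False
print(is_use_allowed("internal-llm", "internal", "legal"))    # True
```

Deny-by-default for unknown tools mirrors the review process described above: a new AI use case gets evaluated and added to the allowlist rather than being used first and governed later.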

Implementing monitoring and detection for unauthorized AI use

Deploying strong endpoint or SASE/CASB-based detections and data loss prevention (DLP) mechanisms plays a huge role in identifying unauthorized AI usage within the organization and mitigating potential breaches or misuse. Scanning for intellectual property within open source AI models is also crucial. Meticulous inspection safeguards proprietary information and prevents unintended (and costly) infringements.
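By way of illustration, even a simple offline scan of exported proxy logs can complement the endpoint and SASE/CASB detections described above. Everything in this sketch is an assumption made for the example: the domain list, the CSV schema with user and host columns, and the file name.

```python
import csv
from collections import Counter

# Hypothetical sketch: count requests to known generative-AI domains in an
# exported proxy log (CSV with "user" and "host" columns), excluding users
# who are approved to use those tools. Domain list is illustrative only.
AI_DOMAINS = {"chat.openai.com", "api.openai.com",
              "claude.ai", "gemini.google.com"}

def unauthorized_ai_hits(log_path: str, approved_users: set[str]) -> Counter:
    hits: Counter = Counter()
    with open(log_path, newline="") as f:
        for row in csv.DictReader(f):
            if row["host"] in AI_DOMAINS and row["user"] not in approved_users:
                hits[(row["user"], row["host"])] += 1
    return hits

if __name__ == "__main__":
    for (user, host), n in unauthorized_ai_hits("proxy_log.csv", {"alice"}).items():
        print(f"{user} -> {host}: {n} requests")
```

A real deployment would pull this telemetry continuously from the CASB or DNS layer, draw the approved-user set from the policy allowlist, and feed hits into the education and policy-update loop rather than printing them.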

As businesses delve deeper into AI integration, formulating clear yet extensive policies enables them to harness the potential of AI while also mitigating its risks.

Effective policy design also fosters ethical AI usage and creates organizational resilience in a world that will only become more AI-driven. Make no mistake: This is an urgent matter. Organizations that embrace AI with well-defined policies will give themselves the best opportunity to effectively navigate this transformation while also upholding ethical standards and achieving their strategic goals.