Friday, May 9, 2025
Security Boulevard
The Home of the Security Bloggers Network
Personal Data Auction: Gravy Analytics Breach, Subaru Starlink Vulnerability Exposed
Tom Eston | January 27, 2025
Tags: 30 Million Data Points, Cyber Security, cyber threat, Cybersecurity, Data Broker, Data Privacy, Data Regulation, Digital Privacy, Episodes, Gravy Analytics, Gravy Analytics Breach, Information Security, Infosec, Location Data Leak, Personal Data Auction, Podcast, Podcasts, Privacy, Privacy Legislation, Real-Time Bidding, security, Smart Cars Security, Subaru, Subaru Starlink Vulnerability, Subaru Vehicle Controls, technology, Vehicle Hacking, Vulnerability Exploitation, Weekly Edition
In this episode, we discuss the latest issues with data brokers, focusing on a breach at Gravy Analytics that leaked 30 million location data points online. We also explore a vulnerability in ...
Shared Security Podcast