
AWS Partner Network launches new AWS Automotive Competency
AWS has introduced the AWS Automotive Competency program, recognizing APN Partners with demonstrated expertise in helping customers along their automotive journey. AWS Automotive Competency Partners have demonstrated AWS expertise in at least one of eight strategic automotive workloads:
1. Autonomous Mobility (AM)
2. Software Defined Vehicle (SDV)
3. Connected Mobility
4. Sustainability
5. Digital Customer Experience (DCE)
6. Product Engineering
7. Manufacturing
8. Supply Chain
AWS Automotive Competency Partners offer advanced solutions and services that enable customers to bring innovative products to market efficiently. They serve a wide range of clients, from AutoTech startups to major Automakers, with a global presence. These partners undergo rigorous technical evaluation to ensure a consistent, high-quality customer experience. If you're an APN Partner experienced in working with automotive clients on AWS, you can explore this opportunity further.
To learn more, click here.
Amazon CloudWatch Logs announces regular expression filter pattern syntax support
Read more by clicking here.
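The new syntax wraps a regular expression in percent signs inside a filter pattern, for example %ERROR%. As a rough local illustration (not the service's own matcher, whose regex dialect has its own limits), a filter like this can be approximated with Python's re module:

```python
import re

def matches_filter(pattern: str, log_line: str) -> bool:
    """Approximate a CloudWatch Logs filter pattern.

    A %...% pattern is treated as a regular expression; anything else falls
    back to plain substring matching (a simplification of the real syntax).
    """
    if pattern.startswith("%") and pattern.endswith("%"):
        return re.search(pattern[1:-1], log_line) is not None
    return pattern in log_line

# A regex pattern selecting lines that report 4xx/5xx status codes.
print(matches_filter(r"%status: [45]\d\d%", "GET /index status: 503"))  # True
```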
AWS WAF Bot Control now protects against distributed proxy-based attacks
AWS WAF Bot Control for Targeted Bots has introduced predictive machine learning (ML) technology to defend against distributed, proxy-based attacks. It builds on the existing protection against sophisticated, evasive bots and now extends Block, Challenge, and CAPTCHA rule actions to distributed bot attacks, such as those using residential and mobile proxies. Threat actors often exploit compromised residential computers to create bots and use residential proxies to avoid detection, frequently rotating IP addresses to evade rate-limiting rules and sourcing traffic from proxies around the globe.
AWS WAF Bot Control now employs managed ML to combat these threats automatically, providing bot confidence levels that help you customize enforcement actions. The feature is available to all Bot Control for Targeted Bots users at no additional cost beyond standard AWS WAF charges. For more information, consult the documentation.
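In the AWS WAF API, Targeted Bots protection (which now includes these ML-based detections) is enabled by referencing the AWSManagedRulesBotControlRuleSet managed rule group at the TARGETED inspection level. A sketch of the rule fragment a web ACL would carry; the rule and metric names here are placeholders:

```python
# Placeholder rule/metric names; shaped after the AWS WAF web ACL Rule structure.
targeted_bot_rule = {
    "Name": "targeted-bot-control",
    "Priority": 1,
    "Statement": {
        "ManagedRuleGroupStatement": {
            "VendorName": "AWS",
            "Name": "AWSManagedRulesBotControlRuleSet",
            "ManagedRuleGroupConfigs": [
                {
                    "AWSManagedRulesBotControlRuleSet": {
                        # TARGETED enables the Targeted Bots detections.
                        "InspectionLevel": "TARGETED"
                    }
                }
            ],
        }
    },
    "OverrideAction": {"None": {}},
    "VisibilityConfig": {
        "SampledRequestsEnabled": True,
        "CloudWatchMetricsEnabled": True,
        "MetricName": "targeted-bot-control",
    },
}
```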
Amazon has introduced new EC2 R7iz instances, plus M6id and R6id database instances
Learn more here.
Incremental export from Amazon DynamoDB to Amazon S3
Amazon DynamoDB now supports incremental export to Amazon S3. To learn more about incremental export to S3, see Data Exports and the Incremental export from Amazon DynamoDB to Amazon S3 blog. For information about pricing and regional availability, see Amazon DynamoDB pricing.
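The feature is exposed through DynamoDB's ExportTableToPointInTime API with an incremental export type. A sketch of the request parameters a client (for example, boto3's export_table_to_point_in_time) would send; the table ARN and bucket name are placeholders:

```python
from datetime import datetime, timezone

# Placeholder table ARN and bucket; shaped after ExportTableToPointInTime.
incremental_export_request = {
    "TableArn": "arn:aws:dynamodb:us-east-1:123456789012:table/Orders",
    "S3Bucket": "example-export-bucket",
    "ExportFormat": "DYNAMODB_JSON",
    "ExportType": "INCREMENTAL_EXPORT",
    "IncrementalExportSpecification": {
        # Only changes between these two timestamps are exported.
        "ExportFromTime": datetime(2023, 9, 1, tzinfo=timezone.utc),
        "ExportToTime": datetime(2023, 9, 2, tzinfo=timezone.utc),
        "ExportViewType": "NEW_AND_OLD_IMAGES",
    },
}
```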
AWS Compute Optimizer expands support to 11 new Amazon EC2 instance types
AWS Compute Optimizer has expanded its support to include 11 new types of Amazon EC2 instances, including accelerated computing instances such as G4dn and P3. These new recommendations are designed to help customers improve the efficiency of their machine learning, high-performance computing, and graphics-intensive workloads. Customers can use them to find the right accelerated computing options, instance types, GPU counts, and storage settings for their existing resources, making under-performing workloads run better and reducing costs for over-provisioned ones.
Compute Optimizer also provides recommendations for supported accelerated computing instances that run the Amazon CloudWatch agent with the NVIDIA driver, which supplies additional GPU metrics. You can explore further details about AWS Compute Optimizer by visiting the site.
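Recommendations for the newly supported families surface through the same GetEC2InstanceRecommendations API. A sketch of sifting findings client-side; the sample records below are invented, shaped loosely after that response:

```python
# Invented sample records, loosely shaped after Compute Optimizer's
# GetEC2InstanceRecommendations response (illustration only).
sample_recommendations = [
    {"currentInstanceType": "p3.8xlarge", "finding": "OVER_PROVISIONED"},
    {"currentInstanceType": "g4dn.xlarge", "finding": "OPTIMIZED"},
    {"currentInstanceType": "p3.2xlarge", "finding": "UNDER_PROVISIONED"},
]

def by_finding(recs, finding):
    """Pick out instance types with a given finding (e.g. downsizing candidates)."""
    return [r["currentInstanceType"] for r in recs if r["finding"] == finding]

print(by_finding(sample_recommendations, "OVER_PROVISIONED"))  # ['p3.8xlarge']
```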
Link Foundation Models to Your Company's Data Sources Using Amazon Bedrock Agents
In July, AWS unveiled a preview of Amazon Bedrock's agents, a feature that empowers developers to build AI applications that can complete tasks. Now AWS has introduced a new feature that enables secure connections between foundation models (FMs) and your company's data sources using agents.
This feature allows agents to provide FMs in Bedrock with access to extra data, helping the model generate more accurate and context-aware responses without the need for constant retraining. Agents, based on user input, pinpoint the relevant knowledge base, fetch the necessary information, and incorporate it into the input, providing the model with more context to generate more informed responses.
Agents in Amazon Bedrock use a technique called retrieval augmented generation (RAG) to make this happen. To set up a knowledge base, you specify where your data is stored in Amazon S3, choose an embedding model, and provide the details of your vector database. Bedrock transforms your data into embeddings and saves them in the vector database. Afterward, you can link the knowledge base to agents to enable RAG workflows.
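Those setup steps correspond to the CreateKnowledgeBase operation in Bedrock's agent-building API. A minimal sketch of its configuration, assuming an OpenSearch Serverless vector store; all ARNs, names, and index fields are placeholders:

```python
# Placeholder ARNs and names; shaped after the bedrock-agent
# CreateKnowledgeBase request structure.
knowledge_base_request = {
    "name": "product-docs-kb",
    "roleArn": "arn:aws:iam::123456789012:role/BedrockKBRole",
    "knowledgeBaseConfiguration": {
        "type": "VECTOR",
        "vectorKnowledgeBaseConfiguration": {
            # Embedding model used to convert S3 documents into vectors.
            "embeddingModelArn": "arn:aws:bedrock:us-east-1::foundation-model/amazon.titan-embed-text-v1",
        },
    },
    "storageConfiguration": {
        "type": "OPENSEARCH_SERVERLESS",
        "opensearchServerlessConfiguration": {
            "collectionArn": "arn:aws:aoss:us-east-1:123456789012:collection/example",
            "vectorIndexName": "kb-index",
            "fieldMapping": {
                "vectorField": "embedding",
                "textField": "chunk",
                "metadataField": "metadata",
            },
        },
    },
}
```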
To learn more about it, visit the documentation
Introducing Amazon EC2 M2 Pro Mac Instances Powered by Apple Silicon M2 Pro Mac Mini
AWS announces the availability of Amazon EC2 M2 Pro Mac instances. These new instances deliver up to 35 percent better performance than the previous M1 Mac instances when building Apple platform applications.

The EC2 M2 Pro Mac instances are powered by Apple's M2 Pro Mac Mini computers, which come with a 12-core CPU, a 19-core GPU, 32 GB of memory, and a 16-core Apple Neural Engine. What's unique is that these Mac Minis are integrated into the AWS Nitro System through high-speed Thunderbolt connections. This makes them fully managed compute instances with fast network and storage capabilities, providing up to 10 Gbps of Amazon VPC network bandwidth and up to 8 Gbps of Amazon EBS storage bandwidth. You can use these instances with macOS Ventura (version 13.2 or later) AMIs for your projects.
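Like earlier EC2 Mac instances, the M2 Pro variant runs on a Dedicated Host: you allocate a host first, then launch an instance onto it. A sketch of the two request payloads, shaped after EC2's AllocateHosts and RunInstances; the Availability Zone and AMI ID are placeholders:

```python
# Placeholder AZ and AMI ID; requests shaped after EC2 AllocateHosts / RunInstances.
allocate_hosts_request = {
    "AvailabilityZone": "us-west-2a",
    "InstanceType": "mac2-m2pro.metal",  # Dedicated Host for the M2 Pro Mac Mini
    "Quantity": 1,
}

run_instances_request = {
    "ImageId": "ami-0123456789abcdef0",  # a macOS Ventura 13.2+ AMI
    "InstanceType": "mac2-m2pro.metal",
    "MinCount": 1,
    "MaxCount": 1,
    # The host ID returned by AllocateHosts is filled in before launching.
    "Placement": {"Tenancy": "host"},
}
```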
For details, please click here.
Amazon MSK Unveils Streamlined Data Delivery from Apache Kafka to your Data Lake

Kafka is used to build real-time data pipelines that move large volumes of data between different systems or applications. It offers a highly scalable and fault-tolerant messaging system for publishing and subscribing to data. Many AWS customers have adopted Kafka to capture streaming data like click-stream events, transactions, IoT events, and logs from applications and machines. They use this data for real-time analytics, continuous transformations, and immediate distribution to data lakes and databases.
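The publish/subscribe model described above can be sketched in miniature: producers append records to a topic's log, and each consumer reads from its own offset, independently of the others. This toy version (illustration only; real Kafka adds partitions, replication, and durable offsets) captures the idea:

```python
from collections import defaultdict

class ToyTopic:
    """Minimal in-memory sketch of Kafka-style publish/subscribe."""

    def __init__(self):
        self.log = []                     # append-only record log
        self.offsets = defaultdict(int)   # per-consumer read position

    def publish(self, record):
        self.log.append(record)

    def poll(self, consumer):
        """Return unread records for `consumer` and advance its offset."""
        start = self.offsets[consumer]
        self.offsets[consumer] = len(self.log)
        return self.log[start:]

clicks = ToyTopic()
clicks.publish({"event": "page_view", "page": "/home"})
clicks.publish({"event": "click", "target": "buy"})
print(clicks.poll("analytics"))  # both records, in publish order
```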
Click here for more information.
AWS Backup introduces the capability to provide continuous backup support for Amazon Aurora
AWS Backup takes care of keeping your data safe across different AWS services and even in hybrid environments that span cloud and on-premises systems. AWS Backup is now making it easier for customers who use Amazon Aurora databases to protect their data: a new feature lets users restore their database to a specific point in time within the last 35 days.
Now, if you use Amazon Aurora and AWS Backup together, you can go into the AWS Backup console, or use the API or CLI, to bring your database back to a specific moment in time. This new feature is available in all AWS Regions where AWS Backup and Amazon Aurora are offered, so you can start using it right away.
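In the AWS Backup API, point-in-time restore comes from a backup plan rule with continuous backup enabled. A sketch of such a rule as passed to CreateBackupPlan; the rule and vault names are placeholders:

```python
# Placeholder names; shaped after the CreateBackupPlan BackupRule structure.
aurora_continuous_rule = {
    "RuleName": "aurora-continuous-backup",
    "TargetBackupVaultName": "Default",
    "ScheduleExpression": "cron(0 5 * * ? *)",  # daily baseline at 05:00 UTC
    "EnableContinuousBackup": True,  # unlocks point-in-time restores
    # Continuous backups can be retained for at most 35 days.
    "Lifecycle": {"DeleteAfterDays": 35},
}
```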
For additional information, click the link.