AWS Summit 2019 - Session 1: Cost Savings in AWS

TCO

Initial questions

  1. Capacity planning - how do you plan
  2. Utilization - what is your average utilization? (see the CloudWatch sketch below)
  3. Operations - power etc, but that's why we have AWS
  4. Optimization - AWS/cost optimized
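
For the utilization question, a quick way to get an actual average is to pull CloudWatch metrics. A minimal boto3 sketch, assuming a hypothetical instance ID and a 14-day window:

```python
import boto3
from datetime import datetime, timedelta

cloudwatch = boto3.client("cloudwatch")

# hypothetical instance ID - swap in one of ours
response = cloudwatch.get_metric_statistics(
    Namespace="AWS/EC2",
    MetricName="CPUUtilization",
    Dimensions=[{"Name": "InstanceId", "Value": "i-0123456789abcdef0"}],
    StartTime=datetime.utcnow() - timedelta(days=14),
    EndTime=datetime.utcnow(),
    Period=3600,                 # one datapoint per hour
    Statistics=["Average"],
)

datapoints = response["Datapoints"]
if datapoints:
    average = sum(p["Average"] for p in datapoints) / len(datapoints)
    print(f"14-day average CPU utilization: {average:.1f}%")
```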

Cost Optimization Framework

AWS Well-Architected framework - https://aws.amazon.com/architecture/well-architected/

  • build/deploy faster
  • lower/mitigate risk
  • make informed decisions
  • AWS best practices

Stop guessing capacity needs - measure
Automate to allow architecture experimentation
Test systems at production scale
Allow for evolutionary architectures
Drive via data (metrics, etc.)
Improve through Game Days
- wargaming your environment

Cost optimization

Adopt a consumption model
Measure your overall efficiency
Move away from datacenter operations
this is maybe most relevant for us: how to have fewer static servers and more on-demand functions
Analyze and attribute expenditure per department (see the Cost Explorer sketch below)
Use managed services to reduce TCO
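
On attributing spend per department: once a cost allocation tag is activated, the Cost Explorer API can break spend down by it. A rough boto3 sketch, assuming a hypothetical "Department" tag and a single billing month:

```python
import boto3

cost_explorer = boto3.client("ce")

# "Department" is a hypothetical cost allocation tag key
response = cost_explorer.get_cost_and_usage(
    TimePeriod={"Start": "2019-05-01", "End": "2019-06-01"},
    Granularity="MONTHLY",
    Metrics=["UnblendedCost"],
    GroupBy=[{"Type": "TAG", "Key": "Department"}],
)

for group in response["ResultsByTime"][0]["Groups"]:
    tag_value = group["Keys"][0]                          # e.g. "Department$platform"
    amount = group["Metrics"]["UnblendedCost"]["Amount"]
    print(f"{tag_value}: ${float(amount):.2f}")
```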

  • Managed services
    • takes over "turning the lights on"
    • actual product apparently, not sure where
    • Ex. EC2, from unmanaged to highly managed - i.e. from paying for infrastructure to paying per transaction (RDS is a further example):
      • Normal
      • AWS Elastic Beanstalk - autoscaling
      • ECS and EKS - container-based deployment scaling
      • Fargate - pure containers
      • Lambda - pure functionality
  • Appropriate provision
    • choose the right architecture to meet business need
      • steady state v burst
      • consolidated v separated workloads
      • cost v performance - what needs to get done quickly
    • turnaround on adjusting scaling
    • Serverless
      • no idle capacity
      • Met Office in AWS video for weather prediction
      • cost of DB v S3 for record storage -
      • cost of server v serverless for different workloads -
  • right-sizing
    • iterate by adjustment
    • monitor resources and alarms
    • monitor end-user experience
    • consider different instance families
    • LOOK INTO CLOUDWATCH METRICS
    • right-size, THEN reserve
    • elasticity - more smaller instances v fewer larger instances
      • watch use curve -
    • serverless
      • memory allocated is linked to CPU, more is more
      • look at completion time v CPU/cost - there is a tool out there to run this test (identify if a function is memory-bound or CPU-bound; see the memory-sweep sketch after this list)
    • S3
      • RR (Reduced Redundancy) no longer recommended
      • Glacier Deep Archive available
      • One Zone-IA is cheaper than RR #todo
      • Glaciers are "nearline"
      • IA TIERS BILL EVERY OBJECT AS AT LEAST 128KB!!
        • technically one object can bundle multiple files and you can retrieve byte ranges of it, but ugh
      • if retrieving more than once per month, use Standard; if less than once per month, IA is fine
      • Intelligent-Tiering does lifecycle management automatically ($0.0025 per 1,000 objects per month monitoring fee) #todo (see the lifecycle sketch after this list)
  • purchasing options
    • on-demand: EC2, S3
    • provisioned: DynamoDB, Kinesis Data Streams
    • Reserved: EC2, Elasticsearch
    • Spot instances - good for orchestrated apps; they can be reclaimed at any time, so run them as part of a cluster
  • optimised data transfer
  • EC2 Auto Scaling - no cost, may be really good (question about reboots) - https://aws.amazon.com/ec2/autoscaling/faqs/
  • AWS Auto Scaling - a separate thing that spans many AWS services
  • demand-based v time-based scaling - time-based could be really great, but we need to look at how things run (see the scheduled-action sketch after this list)
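
On the serverless right-sizing point (completion time v CPU/cost): one way to run that test ourselves is to sweep memory sizes and time a representative invocation at each. A rough boto3 sketch, assuming a hypothetical function name and payload:

```python
import json
import time
import boto3

lambda_client = boto3.client("lambda")
FUNCTION_NAME = "record-processor"                # hypothetical function
PAYLOAD = json.dumps({"records": 1000}).encode()  # hypothetical representative payload

for memory_mb in (128, 256, 512, 1024):
    lambda_client.update_function_configuration(
        FunctionName=FUNCTION_NAME, MemorySize=memory_mb
    )
    lambda_client.get_waiter("function_updated").wait(FunctionName=FUNCTION_NAME)

    start = time.monotonic()
    lambda_client.invoke(FunctionName=FUNCTION_NAME, Payload=PAYLOAD)
    elapsed = time.monotonic() - start

    # GB-seconds roughly tracks cost: if run time drops as memory (and CPU) grows,
    # the function is CPU-bound and a bigger size may be cheaper overall;
    # if it stays flat, pick the smallest size that fits.
    print(f"{memory_mb} MB: {elapsed:.2f}s (~{memory_mb / 1024 * elapsed:.3f} GB-s)")
```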
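
For the Intelligent-Tiering #todo: a lifecycle rule can move objects into Intelligent-Tiering (and on to Glacier Deep Archive) automatically. A minimal boto3 sketch, assuming a hypothetical bucket and transition days we'd still need to tune to our access patterns:

```python
import boto3

s3 = boto3.client("s3")

s3.put_bucket_lifecycle_configuration(
    Bucket="example-records-bucket",            # hypothetical bucket
    LifecycleConfiguration={
        "Rules": [
            {
                "ID": "tier-then-archive",
                "Filter": {"Prefix": ""},       # whole bucket
                "Status": "Enabled",
                "Transitions": [
                    # hand ongoing access-tier decisions to S3
                    {"Days": 30, "StorageClass": "INTELLIGENT_TIERING"},
                    # anything still around after a year goes to Deep Archive
                    {"Days": 365, "StorageClass": "DEEP_ARCHIVE"},
                ],
            }
        ]
    },
)
```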
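
On demand-based v time-based: EC2 Auto Scaling supports scheduled actions, so a group can grow for business hours and shrink overnight. A sketch assuming a hypothetical Auto Scaling group and UTC schedules:

```python
import boto3

autoscaling = boto3.client("autoscaling")
GROUP_NAME = "web-tier-asg"      # hypothetical Auto Scaling group

# grow for the working day (Recurrence is a UTC cron expression)
autoscaling.put_scheduled_update_group_action(
    AutoScalingGroupName=GROUP_NAME,
    ScheduledActionName="business-hours-up",
    Recurrence="0 8 * * 1-5",
    MinSize=2,
    MaxSize=6,
    DesiredCapacity=4,
)

# shrink overnight
autoscaling.put_scheduled_update_group_action(
    AutoScalingGroupName=GROUP_NAME,
    ScheduledActionName="overnight-down",
    Recurrence="0 20 * * 1-5",
    MinSize=1,
    MaxSize=2,
    DesiredCapacity=1,
)
```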

Tags

  • Tagging used in EC2 scheduler and other policy automation tools
  • tags are case sensitive
  • Cost Allocation Tagging allows billing reports (see the tagging sketch after this list)
    • knowing where things are spent
    • AWS Cost Explorer v CloudHealth
      • Reserved instance recommendations would be good
    • AWS Budgets with actual alerting
    • AWS Trusted Advisor for Business Support etc #todo
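
A minimal tagging sketch with boto3, assuming a hypothetical instance ID and tag keys; the tags only show up in billing reports once they are activated as cost allocation tags in the Billing console:

```python
import boto3

ec2 = boto3.client("ec2")

# tag keys and values are case sensitive, so agree on a convention first
ec2.create_tags(
    Resources=["i-0123456789abcdef0"],   # hypothetical instance ID
    Tags=[
        {"Key": "Department", "Value": "platform"},
        {"Key": "Environment", "Value": "production"},
    ],
)
```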

What Next


OPTIMIZE STRUCTURE OVER TIME
We need to get our own AWS account out from under G2 to see billing more accurately
