Call : (+91) 99 8080 3767
Mail : info@EncartaLabs.com
EncartaLabs

Java Concurrency & Performance

( Duration: 5 Days )

The Java Concurrency & Performance Tuning training course covers every aspect of concurrent programming in Java relevant to practicing Java programmers - from the fundamental problem of race conditions to the principles of lock-free programming.

By attending the Java Concurrency & Performance Tuning workshop, delegates will learn to:

  • Understand concurrency control issues in general
  • Know the concurrency instruments available in Java
  • Avoid common errors and pitfalls
  • Understand concurrency control idioms

Pre-requisite:

  • Basic knowledge of Java

COURSE AGENDA

1. Producer Consumer: Basic Hand-Off (code sketch below)

  • Why wait-notify requires synchronization
    • Lock handling done by the OS
    • The hidden queue
    • Structural modification of the hidden queue by wait-notify
    • Use cases for notify vs notifyAll
    • notifyAll used as a workaround
    • Design issues with synchronization
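
The hand-off above is easiest to see in code. Below is a minimal sketch, built around a hypothetical single-slot Handoff class of our own, showing why wait/notify must run under the object's lock and why notifyAll is the usual safe workaround:

    // Illustrative single-slot hand-off; the Handoff class is our own, not JDK API.
    public class Handoff<T> {
        private T item; // null means the slot is empty

        public synchronized void put(T value) throws InterruptedException {
            while (item != null) {      // guard against spurious wakeups and races
                wait();                 // releases the lock while waiting
            }
            item = value;
            notifyAll();                // wake any consumer waiting for a value
        }

        public synchronized T take() throws InterruptedException {
            while (item == null) {
                wait();
            }
            T value = item;
            item = null;
            notifyAll();                // wake any producer waiting for space
            return value;
        }
    }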

2. Common Issues with Threads (code sketch below)

  • Problems with Thread.stop()
  • Dealing with the interrupted status
  • Thread.UncaughtExceptionHandler
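
As a small illustration of the points above (the worker logic is invented for this sketch), the code below restores the interrupted status after a blocking call and installs an uncaught-exception handler rather than relying on Thread.stop():

    public class InterruptDemo {
        public static void main(String[] args) {
            Thread worker = new Thread(() -> {
                while (!Thread.currentThread().isInterrupted()) {
                    try {
                        Thread.sleep(100);          // simulated blocking work
                    } catch (InterruptedException e) {
                        // sleep() clears the interrupted status before throwing,
                        // so restore it for any caller that checks the flag
                        Thread.currentThread().interrupt();
                        break;
                    }
                }
            });
            worker.setUncaughtExceptionHandler((t, e) ->
                    System.err.println(t.getName() + " died: " + e));
            worker.start();
            worker.interrupt();                     // cooperative cancellation, not stop()
        }
    }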

3. Java Memory Model (JMM) (code sketch below)

  • Sequential Consistency would disallow common optimizations
  • Instruction Reordering
    • Heavily pipelined processors
    • Superscalar processors
  • Cache Coherency
    • NUMA (Non-Uniform Memory Access)
  • Real meaning and effect of synchronization
  • volatile
  • final
  • The changes to the JMM
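
A minimal sketch of the visibility guarantee behind volatile, using a stop flag (class and field names are illustrative):

    public class VolatileFlag {
        private volatile boolean running = true;    // writes become visible to readers

        public void runLoop() {
            while (running) {
                // do work; without volatile the JIT may hoist this read out of the loop
            }
        }

        public void shutdown() {
            running = false;                        // happens-before a read that sees false
        }
    }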

4. Applied Threading Techniques (code sketch below)

  • Thread-local storage
  • Safe construction techniques
  • Unsafe construction techniques
  • Thread-safety levels
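
A minimal thread-local storage sketch: each thread gets its own SimpleDateFormat, since that class is not thread-safe (class and method names are illustrative):

    import java.text.SimpleDateFormat;
    import java.util.Date;

    public class PerThreadFormatter {
        private static final ThreadLocal<SimpleDateFormat> FORMAT =
                ThreadLocal.withInitial(() -> new SimpleDateFormat("yyyy-MM-dd"));

        public static String today() {
            return FORMAT.get().format(new Date()); // each thread uses its own instance
        }
    }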

5. Building Blocks for Highly Concurrent Design (code sketch below)

  • CAS (compare-and-swap)
    • Hardware-based locking
    • Optimistic Design
    • ABA problem
      • Markable reference
      • Stamped reference
      • weakCompareAndSet
    • Wait-free Stack implementation
    • Wait-free Queue implementation
  • Lock Implementation
    • Design issues with synchronization
    • Multiple user conditions and wait queues
    • Lock Polling techniques
    • Reentrant Lock
      • ReentrantReadWriteLock
      • ReentrantLock
    • Based on CAS
  • Lock Striping
    • Lock Striping on table
    • Lock Striping on LinkNodes
  • Identifying scalability bottlenecks in the java.util collections
    • Segregating them by thread-safety level
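
To ground the CAS and ABA topics above, here is a minimal sketch (class, field and value names are ours) of a CAS retry loop and of an AtomicStampedReference whose stamp makes an A-B-A change detectable:

    import java.util.concurrent.atomic.AtomicInteger;
    import java.util.concurrent.atomic.AtomicStampedReference;

    public class CasExamples {
        private final AtomicInteger counter = new AtomicInteger();

        public int increment() {                    // optimistic: read, compute, CAS, retry
            int current;
            do {
                current = counter.get();
            } while (!counter.compareAndSet(current, current + 1));
            return current + 1;
        }

        // Pairing the value with a stamp makes A -> B -> A visible as a change.
        private final AtomicStampedReference<String> top =
                new AtomicStampedReference<>("A", 0);

        public boolean replaceTop(String expected, String update) {
            int[] stampHolder = new int[1];
            String value = top.get(stampHolder);
            return value.equals(expected)
                    && top.compareAndSet(value, update, stampHolder[0], stampHolder[0] + 1);
        }
    }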

6. Highly Concurrent Data Structures - Part 1 (code sketch below)

  • ConcurrentHashMap
    • Structure
    • Almost immutability
    • Using volatile to detect interference
    • Reads do not block in the common code path
    • Locking in remove/put/resize
    • Weakly Consistent Iterators vs Fail Fast Iterators
  • LockFreeHashMap
    • For systems with more than 100 CPUs/cores
    • Constant-time key-value mapping
    • No locks, even during resize
    • All CAS spin loops are bounded
    • Faster than ConcurrentHashMap
    • State-based reasoning
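
A minimal usage sketch that reflects the points above: reads on ConcurrentHashMap do not block on the common path, and per-key updates are atomic without external locking (the word-count example is ours):

    import java.util.concurrent.ConcurrentHashMap;
    import java.util.concurrent.ConcurrentMap;

    public class WordCounts {
        private final ConcurrentMap<String, Long> counts = new ConcurrentHashMap<>();

        public void record(String word) {
            counts.merge(word, 1L, Long::sum);      // atomic read-modify-write per key
        }

        public long countOf(String word) {
            return counts.getOrDefault(word, 0L);   // lock-free read on the common path
        }
    }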

7. Designing for Concurrency (code sketch below)

  • Confinement
  • Immutability
  • Almost Immutability
  • Atomicity
  • Visibility
  • Restructuring and refactoring
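
A minimal immutability sketch in the spirit of the list above: final fields, no setters, and a defensive copy, so instances can be shared across threads without locking (the Route class is illustrative):

    import java.util.ArrayList;
    import java.util.Collections;
    import java.util.List;

    public final class Route {
        private final String name;
        private final List<String> stops;

        public Route(String name, List<String> stops) {
            this.name = name;
            // defensive copy wrapped as unmodifiable: callers cannot mutate our state
            this.stops = Collections.unmodifiableList(new ArrayList<>(stops));
        }

        public String name() { return name; }
        public List<String> stops() { return stops; }
    }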

8. Canned Synchronizers (code sketch below)

  • Synchronous Queue Framework
  • Future
  • Semaphore
  • Mutex
  • Barrier
  • Latches
  • SynchronousQueue
  • Exchanger
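
A minimal sketch of two of the canned synchronizers above: a CountDownLatch that gates worker start and a Semaphore that bounds how many workers run the guarded section at once (thread counts and sleep times are illustrative):

    import java.util.concurrent.CountDownLatch;
    import java.util.concurrent.Semaphore;

    public class SynchronizerDemo {
        public static void main(String[] args) {
            CountDownLatch startGate = new CountDownLatch(1);
            Semaphore permits = new Semaphore(2);   // at most 2 workers in the section

            for (int i = 0; i < 4; i++) {
                new Thread(() -> {
                    try {
                        startGate.await();          // all workers wait for the gate
                        permits.acquire();
                        try {
                            Thread.sleep(50);       // the guarded work
                        } finally {
                            permits.release();
                        }
                    } catch (InterruptedException e) {
                        Thread.currentThread().interrupt();
                    }
                }).start();
            }
            startGate.countDown();                  // open the gate for all workers at once
        }
    }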

9. Highly Concurrent Data Structures - Part 2 (code sketch below)

  • CopyOnWriteArray(List/Set)
  • Queue interfaces
    • Queue
    • BlockingQueue
    • Deque
    • BlockingDeque
  • Queue Implementations
    • ConcurrentLinkedQueue
    • LinkedBlockingQueue and LinkedBlockingDeque
    • ArrayBlockingQueue
    • ArrayDeque
      • Work stealing using deques
      • LinkedTransferQueue
  • Skiplists
    • ConcurrentSkipList(Map/Set)
      • Sequential Skiplist
      • Lock-based concurrent skiplist
      • Lock-free concurrent skiplist
      • Concurrent Skiplist
  • Executor Framework
    • Configuration
  • Fork/Join Framework
    • Hardware shapes the programming idiom
    • Exposing fine-grained parallelism
    • Divide and conquer
    • Fork and join
    • Anatomy of fork/join
    • Work stealing
    • Fork-join decomposition
    • ParallelArray
    • Limitations
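
A minimal fork/join sketch for the divide-and-conquer and work-stealing topics above: a RecursiveTask that sums an array by splitting until a threshold (the threshold and array size are illustrative):

    import java.util.Arrays;
    import java.util.concurrent.ForkJoinPool;
    import java.util.concurrent.RecursiveTask;

    public class SumTask extends RecursiveTask<Long> {
        private static final int THRESHOLD = 1_000;
        private final long[] data;
        private final int from, to;

        public SumTask(long[] data, int from, int to) {
            this.data = data; this.from = from; this.to = to;
        }

        @Override
        protected Long compute() {
            if (to - from <= THRESHOLD) {           // small enough: sum sequentially
                long sum = 0;
                for (int i = from; i < to; i++) sum += data[i];
                return sum;
            }
            int mid = (from + to) >>> 1;
            SumTask left = new SumTask(data, from, mid);
            SumTask right = new SumTask(data, mid, to);
            left.fork();                            // schedule the left half asynchronously
            return right.compute() + left.join();   // work stealing balances the halves
        }

        public static void main(String[] args) {
            long[] data = new long[1_000_000];
            Arrays.fill(data, 1L);
            long total = ForkJoinPool.commonPool().invoke(new SumTask(data, 0, data.length));
            System.out.println(total);              // prints 1000000
        }
    }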

10. Crash Course in Modern Hardware

  • Amdahl's Law (see the formula after this list)
  • Cache
    • Direct-mapped caches
    • Address mapping in the cache
    • Reads
    • Writes
    • The cache controller
  • Memory Architectures
    • UMA (Uniform Memory Access)
    • NUMA (Non-Uniform Memory Access)
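
For reference, Amdahl's Law bounds the speedup S on N processors when only a fraction p of the work can be parallelized:

    S(N) = \frac{1}{(1 - p) + \frac{p}{N}}

For example, with p = 0.9 and N = 16 the bound is 1 / (0.1 + 0.9/16) = 6.4, which is why the serial fraction dominates tuning on large machines.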

11. Concurrent Reasoning

  • Sequential Consistency
  • Linearizability
  • Quiescent Consistency
  • Compositionality

12. Concurrency Patterns

  • Fine-grained synchronization
  • Optimistic synchronization
  • Lazy synchronization
  • Lock-free synchronization

13. Designing for Multi-Core/Multi-Processor Environments

  • Harsh Realities of parallelism
  • Parallel Programming
  • Concurrent Objects
    • Concurrency and Correctness
    • Quiescent Consistency
    • Sequential Consistency
    • Linearizability
    • Progress Conditions
  • Spinlocks
    • Locks suitable for NUMA systems
  • Lists
    • Coarse-grained synchronization
    • Fine-grained synchronization
    • Optimistic synchronization
    • Lazy synchronization
    • Non-blocking synchronization
  • Concurrent Queues
    • Bounded Partial Queue
    • Unbounded Total Queue
    • Unbounded lock-free Queue
  • Concurrent Stacks (a lock-free stack sketch follows this list)
  • Concurrent Hashing
    • Closed-address hashing
    • Open-address hashing
    • Lock-free hashing
  • Skiplist
    • Sequential Skiplist
    • Lock-based concurrent skiplist
    • Lock-free skiplist
  • Priority Queues
    • Array-based bounded priority queue
    • Tree-based bounded priority queue
    • Heap-based unbounded priority queue
    • Skiplist-based unbounded priority queue
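
A minimal lock-free (Treiber-style) stack sketch for the concurrent stack and lock-free synchronization topics above, using CAS on the head reference (class and method names are illustrative):

    import java.util.concurrent.atomic.AtomicReference;

    public class LockFreeStack<T> {
        private static final class Node<T> {
            final T value;
            Node<T> next;
            Node(T value) { this.value = value; }
        }

        private final AtomicReference<Node<T>> head = new AtomicReference<>();

        public void push(T value) {
            Node<T> node = new Node<>(value);
            Node<T> current;
            do {
                current = head.get();
                node.next = current;                // link to the current top
            } while (!head.compareAndSet(current, node)); // retry if another thread won
        }

        public T pop() {
            Node<T> current;
            Node<T> next;
            do {
                current = head.get();
                if (current == null) return null;   // empty stack
                next = current.next;
            } while (!head.compareAndSet(current, next));
            return current.value;
        }
    }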

Encarta Labs Advantage

  • A one-stop corporate training solutions provider, offering over 6,000 courses on a wide variety of subjects
  • All courses are delivered by industry veterans
  • Get jump-started from newbie to production-ready in a matter of a few days
  • Trained more than 50,000 corporate executives across the globe
  • All our trainings are conducted in workshop mode, with a strong focus on hands-on sessions

View our other course offerings by visiting http://encartalabs.com/course-catalogue-all.php

Contact us to have this course delivered as a public/open-house workshop or as online training for a group of 10+ candidates.
