11.1 Thinking About Performance
Definition of Performance
Doing more work with fewer resources
Resources can include:
CPU cycles
Memory
Network bandwidth
I/O bandwidth
Disk space
Database requests
Resource Constraints
A bottleneck appears when an activity is limited by the availability of one resource; such activities are described as:
CPU-bound
I/O-bound
Memory-bound
Database-bound
Overhead of Using Threads
Costs introduced:
Locking
Signaling
Memory synchronization
Context switching
Thread lifecycle (creation/teardown)
Scheduling
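The cost of locking and memory synchronization listed above is the price paid for correctness. A minimal sketch (class and method names are illustrative, not from the source): two threads increment a shared counter through a synchronized method, which adds locking overhead but guarantees the exact final count.

```java
// Sketch: locking overhead buys correctness. Each call to increment()
// pays for lock acquisition and memory synchronization, and contention
// may force context switches, but the final count is exact.
public class SyncOverheadDemo {
    private static long count = 0;

    private static synchronized void increment() { count++; }

    public static long run(int threads, int perThread) throws InterruptedException {
        count = 0;
        Thread[] workers = new Thread[threads];
        for (int i = 0; i < threads; i++) {
            workers[i] = new Thread(() -> {
                for (int j = 0; j < perThread; j++) increment();
            });
            workers[i].start();
        }
        for (Thread t : workers) t.join();  // join creates a happens-before edge
        return count;
    }

    public static void main(String[] args) throws InterruptedException {
        System.out.println(run(4, 100_000)); // prints 400000
    }
}
```

Removing `synchronized` here would remove the overhead but also the guarantee: the unsynchronized version can lose updates, which is exactly the "poorly designed" end of the tradeoff.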
Tradeoff:
If well-designed: ↑ Throughput / Responsiveness / Capacity
If poorly designed: ↓ Performance vs. sequential version
Goals of Concurrency for Performance
Utilize current processing resources better
Exploit additional resources if available
Performance goal: keep CPUs busy with useful work
Not just “busy” (avoid useless computation)
Use decomposition to keep processors active
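One way to decompose work across processors, sketched with a parallel stream (the class and method names are assumptions for illustration): the runtime splits the range across the common fork-join pool so every core does useful work.

```java
import java.util.stream.LongStream;

// Sketch: decomposition keeps processors busy with useful work.
// The parallel stream partitions the range 1..n across available cores.
public class DecompositionDemo {
    public static long sumOfSquares(long n) {
        return LongStream.rangeClosed(1, n)
                         .parallel()
                         .map(x -> x * x)
                         .sum();
    }

    public static void main(String[] args) {
        System.out.println(sumOfSquares(1_000)); // prints 333833500
    }
}
```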
Performance vs. Scalability
Definitions:
Performance = "How fast?"
Service time
Latency
Scalability = "How much?"
Throughput
Capacity
Key Concepts:
Scalability = ability to increase throughput or capacity when resources (CPUs, memory, I/O) are added
Improving scalability usually requires parallelizing the workload
Performance & scalability can conflict:
Pipelining improves scalability but may increase service time
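The pipelining point can be made concrete with a two-stage sketch (names and structure are illustrative, not from the source): work is handed between stages through a queue, so stages run concurrently and throughput under load rises, but each individual item now also pays handoff and queueing time, increasing its service time.

```java
import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.BlockingQueue;

// Sketch: a two-stage pipeline. Stage 1 (main thread) transforms items
// and enqueues them; stage 2 (worker thread) consumes them. Concurrency
// raises throughput, but each item's latency now includes the handoff.
public class PipelineDemo {
    public static int process(int items) throws InterruptedException {
        BlockingQueue<Integer> handoff = new ArrayBlockingQueue<>(16);
        final int POISON = -1;            // sentinel to stop stage 2
        final int[] total = {0};

        Thread stage2 = new Thread(() -> {
            try {
                int v;
                while ((v = handoff.take()) != POISON) {
                    total[0] += v;        // second stage of the work
                }
            } catch (InterruptedException ignored) { }
        });
        stage2.start();

        for (int i = 1; i <= items; i++) {
            handoff.put(i * 2);           // first stage of the work
        }
        handoff.put(POISON);
        stage2.join();                    // happens-before: total[0] is visible
        return total[0];
    }

    public static void main(String[] args) throws InterruptedException {
        System.out.println(process(100)); // prints 10100
    }
}
```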
Tricks for sequential optimization may hurt scalability
Practical Considerations:
Monolithic systems hard to scale
Accept some performance cost to gain scalability
For server applications: Scalability > Speed
For interactive apps: Latency (speed) is key
Evaluating Performance Trade-offs
Engineering trade-offs are inevitable
Avoid premature optimization
“Make it right, then fast (if needed)”
Common Tradeoffs:
Service time vs. memory usage
Performance vs. safety
Performance vs. readability/maintainability
Performance vs. object-oriented design (e.g., encapsulation)
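The "service time vs. memory usage" tradeoff above has a classic form: caching. A minimal sketch (the memoized-Fibonacci example is mine, not from the source) that trades one map entry of memory per input for a large reduction in repeated computation:

```java
import java.util.HashMap;
import java.util.Map;

// Sketch: trading memory for service time. Each cached entry costs
// memory but turns an exponential recomputation into a single lookup.
public class MemoDemo {
    private static final Map<Integer, Long> cache = new HashMap<>();

    public static long fib(int n) {
        if (n < 2) return n;
        Long cached = cache.get(n);
        if (cached != null) return cached;      // pay memory, save time
        long result = fib(n - 1) + fib(n - 2);
        cache.put(n, result);
        return result;
    }

    public static void main(String[] args) {
        System.out.println(fib(50)); // prints 12586269025
    }
}
```

Note this sketch is single-threaded on purpose; sharing the `HashMap` across threads without synchronization would be exactly the kind of performance-vs-safety mistake the next section warns about.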
Decision-making Questions
Ask before optimizing:
What does “faster” mean in this case?
Under what load/data conditions is this true?
Can you measure it?
How often do these conditions occur?
Will this code run in different environments?
What are the hidden costs (e.g., maintenance risk)?
Performance in Concurrency
Performance tuning is risky in concurrent systems
Optimization often introduces concurrency bugs
E.g., double-checked locking
Don’t trade safety for uncertain performance gains
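Double-checked locking, the example cited above, is worth seeing: the original idiom was broken because the unsynchronized first read could observe a partially constructed object. A sketch of the repaired form (since Java 5, declaring the field `volatile` makes it safe; the simpler lazy-holder idiom is usually preferable):

```java
// Sketch: double-checked locking, made safe with volatile (Java 5+).
// Without volatile, the unsynchronized first check can see a reference
// to an object whose fields are not yet fully written.
public class Singleton {
    private static volatile Singleton instance; // volatile is essential

    private Singleton() { }

    public static Singleton getInstance() {
        Singleton result = instance;            // first check, no lock
        if (result == null) {
            synchronized (Singleton.class) {
                result = instance;              // second check, under lock
                if (result == null) {
                    instance = result = new Singleton();
                }
            }
        }
        return result;
    }

    public static void main(String[] args) {
        System.out.println(getInstance() == getInstance()); // prints true
    }
}
```

The optimization's whole point was to skip the lock on the common path; that it was subtly wrong for years is the section's argument against trading safety for uncertain gains.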
Optimization Best Practices
Have clear performance goals
Use measurement tools
Avoid guessing — measure performance
Use realistic configs/load profiles
Always verify improvements after tuning
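A crude measurement sketch in the spirit of "measure, don't guess" (the harness below is my illustration; serious tuning should use a real benchmarking tool such as JMH, and realistic load profiles as noted above):

```java
// Sketch: a rough before/after timing harness using System.nanoTime.
// Warmup iterations let the JIT settle before measuring; even this
// crude approach beats guessing, though JMH is the proper tool.
public class MeasureDemo {
    public static long meanNanosPerRun(Runnable task, int warmup, int runs) {
        for (int i = 0; i < warmup; i++) task.run(); // warm up the JIT
        long start = System.nanoTime();
        for (int i = 0; i < runs; i++) task.run();
        return (System.nanoTime() - start) / runs;   // mean ns per run
    }

    public static void main(String[] args) {
        long ns = meanNanosPerRun(() -> {
            long s = 0;
            for (int i = 0; i < 10_000; i++) s += i;
        }, 100, 100);
        System.out.println("mean ns/run: " + ns);
    }
}
```

Run the same harness before and after a tuning change, under the same configuration, to verify the change actually helped.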
Tools for Performance Monitoring
Use profiling tools
Example: perfbar (free tool)
Shows CPU utilization
Helps determine if tuning is needed or working