Volume 6 / Issue 10

DOI:   10.3217/jucs-006-10-0968

 

Compiler Generated Multithreading to Alleviate Memory Latency

Kristof E. Beyls (Dept. of Electronics and Information Systems, University of Ghent, Belgium)

Erik H. D'Hollander (Dept. of Electronics and Information Systems, University of Ghent, Belgium)

Abstract: Since the era of vector and pipelined computing, computational speed has been limited by memory access time. Faster caches and more cache levels are used to bridge the growing gap between memory and processor speeds. With the advent of multithreaded processors, it becomes feasible to fetch data and compute concurrently in two cooperating threads. A technique is presented to generate these threads at compile time, taking into account the characteristics of both the program and the underlying architecture. The results have been evaluated on an explicitly parallel processor. For a number of common programs, the data-fetch thread allows the computation to continue without cache miss stalls.
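The threads described in the paper are generated by the compiler; purely to illustrate the underlying idea, the following is a minimal hand-written sketch, assuming a POSIX-threads helper thread that touches the next block of data to bring it into the cache while the main thread computes on the current block. The names (fetch_thread, BLOCK) and the per-block barrier synchronization are assumptions for this sketch, not the authors' implementation.

/*
 * Sketch of a cooperating data-fetch thread and compute thread.
 * The fetch thread reads block b+1 while the compute thread works
 * on block b, so each block is cache-resident when computation
 * reaches it (the first block is computed cold).
 */
#include <pthread.h>
#include <stdio.h>

#define N     (1 << 20)   /* total elements            */
#define BLOCK (1 << 14)   /* elements handled per step */

static double a[N];
static pthread_barrier_t step;   /* one hand-off per block */

static void *fetch_thread(void *arg)
{
    volatile double sink = 0.0;
    for (size_t b = 0; b < N; b += BLOCK) {
        size_t next = b + BLOCK;
        for (size_t i = next; i < next + BLOCK && i < N; i++)
            sink += a[i];                 /* touch: load into cache  */
        pthread_barrier_wait(&step);      /* block b+1 is now warm   */
    }
    (void)sink;
    return NULL;
}

int main(void)
{
    for (size_t i = 0; i < N; i++)
        a[i] = (double)i;

    pthread_barrier_init(&step, NULL, 2);
    pthread_t fetcher;
    pthread_create(&fetcher, NULL, fetch_thread, NULL);

    double sum = 0.0;
    for (size_t b = 0; b < N; b += BLOCK) {
        for (size_t i = b; i < b + BLOCK; i++)   /* compute on current block */
            sum += a[i] * a[i];
        pthread_barrier_wait(&step);             /* wait for the next block  */
    }

    pthread_join(fetcher, NULL);
    pthread_barrier_destroy(&step);
    printf("sum = %f\n", sum);
    return 0;
}

In the sketch, the barrier makes the overlap explicit: during each step the compute thread processes block b while the fetch thread loads block b+1, which is the same overlap of memory access and computation that the compiler-generated threads aim for.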

Keywords: cache optimization, compiler optimization, data locality, multithreading, prefetching, run-time data relocation, tiling