tinymembench riscv 1
RobertBerger, Jul 1st, 2021

tinymembench v0.4.9 (simple benchmark for memory throughput and latency)

==========================================================================
== Memory bandwidth tests                                               ==
==                                                                      ==
== Note 1: 1MB = 1000000 bytes                                          ==
== Note 2: Results for 'copy' tests show how many bytes can be          ==
==         copied per second (adding together read and written          ==
==         bytes would have provided twice higher numbers)              ==
== Note 3: 2-pass copy means that we are using a small temporary buffer ==
==         to first fetch data into it, and only then write it to the   ==
==         destination (source -> L1 cache, L1 cache -> destination)    ==
== Note 4: If sample standard deviation exceeds 0.1%, it is shown in    ==
==         brackets                                                     ==
==========================================================================
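A rough sketch of what the copy variants measured below do, written against the notes above rather than tinymembench's actual source: a plain one-pass loop, a prefetched loop (the 64-byte step mirrors the test names, while the 256-byte prefetch distance is an arbitrary choice for this example), and the 2-pass copy from Note 3 that stages data through a buffer small enough to stay in L1. The function names and the 8 KB staging size are illustrative assumptions.

/* Illustrative only: not tinymembench's source. Assumes GCC/Clang for
 * __builtin_prefetch. */
#include <stddef.h>
#include <string.h>

/* One-pass copy: read src, write straight to dst ("C copy"). */
static void copy_one_pass(char *dst, const char *src, size_t size)
{
    for (size_t i = 0; i < size; i++)
        dst[i] = src[i];
}

/* Prefetched copy: hint upcoming source cache lines before using them.
 * The 64-byte step mirrors the test names above; the 256-byte prefetch
 * distance is an arbitrary choice for this example. */
static void copy_prefetched(char *dst, const char *src, size_t size)
{
    for (size_t i = 0; i < size; i += 64) {
        __builtin_prefetch(src + i + 256);
        size_t n = (size - i < 64) ? size - i : 64;
        memcpy(dst + i, src + i, n);
    }
}

/* 2-pass copy (Note 3): stage chunks through a buffer small enough to
 * stay in L1, then write them out
 * (source -> L1 cache, L1 cache -> destination). */
static void copy_two_pass(char *dst, const char *src, size_t size)
{
    static char tmp[8192];              /* assumed to fit in L1 dcache */
    for (size_t i = 0; i < size; i += sizeof(tmp)) {
        size_t n = (size - i < sizeof(tmp)) ? size - i : sizeof(tmp);
        memcpy(tmp, src + i, n);        /* pass 1: source -> L1 */
        memcpy(dst + i, tmp, n);        /* pass 2: L1 -> destination */
    }
}

In the numbers that follow, the 2-pass variants land a little below the one-pass copies, which is consistent with the extra trip through the staging buffer.
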
 C copy backwards                          :    121.9 MB/s
 C copy backwards (32 byte blocks)         :    121.8 MB/s
 C copy backwards (64 byte blocks)         :    121.8 MB/s
 C copy                                    :    121.8 MB/s
 C copy prefetched (32 bytes step)         :    120.7 MB/s (0.3%)
 C copy prefetched (64 bytes step)         :    121.8 MB/s
 C 2-pass copy                             :    112.6 MB/s
 C 2-pass copy prefetched (32 bytes step)  :    111.6 MB/s
 C 2-pass copy prefetched (64 bytes step)  :    112.6 MB/s
 C fill                                    :    227.5 MB/s (0.3%)
 C fill (shuffle within 16 byte blocks)    :    227.5 MB/s
 C fill (shuffle within 32 byte blocks)    :    227.5 MB/s (0.3%)
 C fill (shuffle within 64 byte blocks)    :    227.4 MB/s
 ---
 standard memcpy                           :    117.2 MB/s
 standard memset                           :    227.3 MB/s

==========================================================================
== Memory latency test                                                  ==
==                                                                      ==
== Average time is measured for random memory accesses in the buffers   ==
== of different sizes. The larger the buffer, the more significant      ==
== are relative contributions of TLB, L1/L2 cache misses and SDRAM      ==
== accesses. For extremely large buffer sizes we are expecting to see   ==
== page table walk with several requests to SDRAM for almost every      ==
== memory access (though 64MiB is not nearly large enough to experience ==
== this effect to its fullest).                                         ==
==                                                                      ==
== Note 1: All the numbers are representing extra time, which needs to  ==
==         be added to L1 cache latency. The cycle timings for L1 cache ==
==         latency can usually be found in the processor documentation. ==
== Note 2: Dual random read means that we are simultaneously performing ==
==         two independent memory accesses at a time. If                ==
==         the memory subsystem can't handle multiple outstanding       ==
==         requests, dual random read has the same timings as two       ==
==         single reads performed one after another.                    ==
==========================================================================
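The sketch below illustrates the measurement idea described above, not tinymembench's actual code: a buffer is filled with a random cycle and walked with data-dependent reads so each access must wait for the previous one (single random read), then walked as two independent chains per iteration (dual random read), which can overlap only if the memory subsystem supports multiple outstanding requests. The buffer size, the Sattolo shuffle, and the per-pair reporting are assumptions made for this example. In the table that follows, the dual numbers are roughly twice the single ones, which per Note 2 suggests this system handles the two reads one after another.

/* Sketch of single vs. dual random-read latency measurement (POSIX,
 * clock_gettime). Sizes and the shuffle are illustrative choices only. */
#include <stdio.h>
#include <stdlib.h>
#include <time.h>

int main(void)
{
    size_t entries = 1u << 20;                 /* 8 MiB of indices on a 64-bit system */
    size_t *buf = malloc(entries * sizeof *buf);
    if (!buf)
        return 1;

    /* Build one random cycle over all entries (Sattolo's algorithm) so
     * repeatedly following buf[x] hops around unpredictably and defeats
     * hardware prefetchers. */
    for (size_t i = 0; i < entries; i++)
        buf[i] = i;
    for (size_t i = entries - 1; i > 0; i--) {
        size_t j = (size_t)rand() % i;
        size_t t = buf[i]; buf[i] = buf[j]; buf[j] = t;
    }

    struct timespec t0, t1;

    /* Single random read: every access depends on the previous one. */
    clock_gettime(CLOCK_MONOTONIC, &t0);
    size_t a = 0;
    for (size_t n = 0; n < entries; n++)
        a = buf[a];
    clock_gettime(CLOCK_MONOTONIC, &t1);
    double single_ns = ((t1.tv_sec - t0.tv_sec) * 1e9
                        + (t1.tv_nsec - t0.tv_nsec)) / entries;

    /* Dual random read: two independent chains per iteration, reported per
     * iteration (per pair of reads). With no overlap this is ~2x the single
     * number; with full overlap it approaches 1x. */
    clock_gettime(CLOCK_MONOTONIC, &t0);
    size_t b = 0, c = entries / 2;
    for (size_t n = 0; n < entries; n++) {
        b = buf[b];
        c = buf[c];
    }
    clock_gettime(CLOCK_MONOTONIC, &t1);
    double dual_ns = ((t1.tv_sec - t0.tv_sec) * 1e9
                      + (t1.tv_nsec - t0.tv_nsec)) / entries;

    printf("single: %.1f ns   dual: %.1f ns\n", single_ns, dual_ns);
    printf("(checksum %zu)\n", a + b + c);     /* keep the walks live */
    free(buf);
    return 0;
}
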
block size : single random read / dual random read
      1024 :     0.0 ns  /     0.0 ns
      2048 :     0.0 ns  /     0.1 ns
      4096 :     0.0 ns  /     0.0 ns
      8192 :     0.0 ns  /     0.0 ns
     16384 :     0.0 ns  /     0.0 ns
     32768 :     0.2 ns  /     0.2 ns
     65536 :    19.4 ns  /    38.6 ns
    131072 :    29.0 ns  /    57.9 ns
    262144 :    38.2 ns  /    78.4 ns
    524288 :    42.8 ns  /    89.1 ns
   1048576 :    55.0 ns  /   111.8 ns
   2097152 :   179.5 ns  /   359.5 ns
   4194304 :   246.0 ns  /   492.1 ns
   8388608 :   284.8 ns  /   569.4 ns
  16777216 :   313.3 ns  /   626.9 ns
  33554432 :   335.4 ns  /   670.9 ns
  67108864 :   359.5 ns  /   718.9 ns