Description
I ran the Banshee code successfully using the test.cfg file. In the memory configuration, the cache_scheme options listed are AlloyCache, HybridCache (Banshee), UnisonCache, and Tagless. However, in mc.cpp I found that the NoCache and CacheOnly modes can also be set. When I selected NoCache, I found that for the same workload, its simulated time and CPU IPC were better than those of the other modes, such as AlloyCache.
This seems to contradict common sense: for most workloads, the absence of a cache should result in worse performance. Below is the configuration.
mem = {
    enableTrace = false;
    mapGranu = 64;
    controllers = 2;
    type = "DramCache";
    # cache_scheme: AlloyCache, HybridCache (Banshee), UnisonCache, Tagless
    cache_scheme = "NoCache";
    ext_dram = {
        type = "DDR";
        ranksPerChannel = 4;
        banksPerRank = 8;
    };
    mcdram = {
        ranksPerChannel = 8;
        banksPerRank = 8;
        cache_granularity = 64;
        size = 512;
        mcdramPerMC = 8;
        num_ways = 1;
        sampleRate = 1.0;
        # placementPolicy: LRU, FBR
        placementPolicy = "LRU";
        type = "DDR";
    };
};

Could it be that the implementations of NoCache and CacheOnly in mc.cpp are simpler, while the other schemes are more complex, giving those schemes a natural disadvantage in simulation compared to NoCache and CacheOnly?