c# - Fast parallel adding to a dictionary when keys never collide


I have a situation where a dictionary is created, key-value pairs are added to it, and after that point the dictionary is only used for reading values.

I'm trying to implement the quickest way of adding to the dictionary during the initialization phase.

ConcurrentDictionary has a slow TryAdd method (and GetOrAdd) - on a 6-core CPU (12 threads), CPU usage stays at 25%, indicating only about 3 threads are being used.

It is faster to add the keys (around 25 million) to a Dictionary sequentially than to use a ConcurrentDictionary with Parallel.For.
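Roughly, the two approaches I'm comparing look like this (a minimal sketch; the key range and the MakeValue helper are placeholders for however my real keys and values are produced):

using System;
using System.Collections.Concurrent;
using System.Collections.Generic;
using System.Diagnostics;
using System.Threading.Tasks;

class InitBenchmark
{
    const int Count = 25_000_000;

    // Stand-in for the real value lookup/computation.
    static string MakeValue(long key) => key.ToString();

    static void Main()
    {
        // Sequential fill of a plain Dictionary.
        var sw = Stopwatch.StartNew();
        var dict = new Dictionary<long, string>(Count);
        for (long key = 0; key < Count; key++)
            dict[key] = MakeValue(key);
        Console.WriteLine($"Dictionary, sequential: {sw.Elapsed}");

        // Parallel fill of a ConcurrentDictionary.
        sw.Restart();
        var cdict = new ConcurrentDictionary<long, string>();
        Parallel.For(0L, Count, key => cdict.TryAdd(key, MakeValue(key)));
        Console.WriteLine($"ConcurrentDictionary, Parallel.For: {sw.Elapsed}");
    }
}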

How can I improve speed here? It is easy to partition the key-value pairs so that keys never collide when added to the dictionary, but using the code below from multiple threads creates problems:

Dictionary<long, string> d = new Dictionary<long, string>();
d[key] = value;

It seems that this, when used in a multi-threaded environment, fails because of the dictionary's internal changes (resizing?).

Would this work instead?

int capacity = 250000000; // basically big enough to store all the data
Dictionary<long, string> d = new Dictionary<long, string>(capacity);
d[key] = value;

I would prefer using Dictionary over ConcurrentDictionary, because reads are faster (and read speed is crucial for the application).

If you know the maximum size of the dictionary, then yes, you can improve speed by pre-allocating. However, that won't prevent concurrent adds from potentially corrupting the dictionary. The internal lists still get updated, and it's quite possible for two threads to end up storing a key at the same index in a list.

Have you tried the obvious thing? That is:

lock (dictionaryLock)
{
    dict[key] = value;
}

If the lock is not contended, it's going to take perhaps 20 nanoseconds. If it is contended, the delay will be longer, but you're only doing a very short operation inside the lock. Whether this ends up faster will depend a lot on the kind of processing being done between the calls to add.
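A rough sketch of what that could look like, combining your pre-sized Dictionary with a lock inside Parallel.For (ProduceValue is a stand-in for whatever generates your values, and the count is just the figure from your question):

using System;
using System.Collections.Generic;
using System.Threading.Tasks;

class LockedParallelFill
{
    // Stand-in for the real per-key value computation.
    static string ProduceValue(long key) => key.ToString();

    static void Main()
    {
        const int count = 25_000_000;                  // number of keys to add
        var dict = new Dictionary<long, string>(count); // pre-size so the dictionary never resizes while filling
        var dictionaryLock = new object();

        Parallel.For(0L, count, key =>
        {
            string value = ProduceValue(key);          // any per-key work stays outside the lock

            lock (dictionaryLock)                      // only the short add is serialized
            {
                dict[key] = value;
            }
        });

        Console.WriteLine(dict.Count);
    }
}

The more work ProduceValue does relative to the add itself, the less the lock contention matters.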

