Based on the work completed by , the following is my summary and review of Rowhammer.js and bit flipping in general. I completed these notes as part of my work at the University of Oxford and towards GCHQ accreditation. All comments are my own.
A very brief introduction to DRAM, highlighting the main areas of concern in row hammering.
Each cell consists of a capacitor and stores binary data as a charge: a charged cell represents a binary value of 1, while a discharged cell represents a binary value of 0. DRAM is volatile memory and loses its charge over time (the retention rate), so each cell must be refreshed frequently, i.e. once every 64ms, as highlighted by  and defined by ; if a cell is not refreshed, its data is lost. DRAM implements a cache via the row buffer, and bypassing the row buffer is central to row hammering. Double-sided row hammering, as used by , is represented in the above diagram by accessing two aggressor rows either side of a victim row; a disturbance error (bit flip) occurs in at least one of the victim row's cells, e.g. in the diagram one of the victim row circles (a cell's capacitor) has its value flipped. Isolation of these cells is paramount to secure systems, i.e. accessing one cell should not adversely affect neighboring cells. If isolation is vulnerable and the retention rate is adversely affected, disturbances may occur: a cell that loses its charge before its next refresh flips its bit. DRAM has scaled in recent years, increasing cell density (cells are packed closer together), which can adversely affect cell isolation; this is a central theme of row hammering.
Repeated access (in the millions) to a row affects neighboring rows, accelerating charge leakage in those rows. If this leakage is faster than expected and occurs before the next refresh, a disturbance can occur, i.e. a particular nearby cell loses its charge at an accelerated rate. As the  DRAM specification indicates a refresh interval of once every 64ms, if the leakage rate exceeds the expected rate and the cell has not been refreshed within 64ms, the cell loses its data.
This repeated access to a row is termed row hammering. For row hammering to be most effective, two addresses X and Y are required (double-sided row hammering); X and Y must map to two different rows within the same bank. It is important to note that the victim is not one of the addresses accessed; instead, the victim is a close neighbor of X and Y. It is also important to note that  did not provide an exploit, only an approach to one; however, building on 's research, an exploit has since been demonstrated by Google's Project Zero team .
Each of the papers focused on different aspects.
The main focus of  was DRAM disturbance errors without directly accessing a victim row: neighboring rows are continually accessed (hammered), causing disturbance errors in nearby rows, i.e. the victim row. The net result of hammering is a disturbance error, equivalent to a bit flip, i.e. the binary cell value changes from 1 to 0 or vice versa. Central to these disturbance errors is flushing the cache with the native CLFLUSH instruction, which ensures DRAM addresses are accessed directly rather than served from the cache. Since row hammering depends on the hammered addresses being within the same bank, forcing the bank's row buffer to be re-opened on each access is paramount, i.e. row hammering requires the row buffer to be bypassed. This paper  relied on native instructions and required direct access to the host computer. A root exploit was not established; however, the authors noted that industry had been aware of the row hammer concept since 2012. In this paper , selecting the physical addresses to hammer required knowledge of the underlying CPU memory management unit's address mapping.
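The hammering loop described by  can be sketched in C as follows. The function name, iteration count, and use of the `_mm_clflush` intrinsic (which emits CLFLUSH) are my own illustrative choices, not the paper's exact code; the code is x86-specific.

```c
#include <emmintrin.h>  /* _mm_clflush (x86 only) */
#include <stdint.h>

/* Repeatedly read two addresses X and Y that lie in different rows of
 * the same bank, flushing both from the CPU cache after every read so
 * that each access reaches DRAM and forces the bank to re-open a row
 * (illustrative sketch of the hammering technique). */
static void hammer(volatile uint8_t *x, volatile uint8_t *y, long reads)
{
    for (long i = 0; i < reads; i++) {
        (void)*x;                       /* activate the row containing X */
        (void)*y;                       /* activate the row containing Y */
        _mm_clflush((const void *)x);   /* evict X so the next read hits DRAM */
        _mm_clflush((const void *)y);   /* evict Y so the next read hits DRAM */
    }
}
```

The alternation between X and Y is what matters: because the two rows share a bank, each access after a flush forces a row-buffer conflict rather than a cached read.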
However, further to the research completed by , an exploit has been demonstrated by Google's Project Zero team . As part of this exploit, address selection was reviewed, i.e. how to determine X and Y without knowing the underlying CPU memory management unit's address mapping. Google's team looked at both randomly selected addresses and addresses selected based on cache hits. They found double-sided row hammering (hammering the rows either side of a victim row) to be most effective, implemented by "naively extrapolating"  (in my view, intelligently guessing) addresses using:
- 256k target address (first aggressor row)
- 256k victim address
- 256k target address (second aggressor row)
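The naive extrapolation above amounts to simple pointer arithmetic. A minimal sketch, assuming a 256 KB stride between consecutive rows (the stride constant and function name are my own illustrative assumptions; the true row layout depends on the DRAM address mapping):

```c
#include <stdint.h>

#define ROW_STRIDE (256 * 1024)  /* assumed 256 KB between adjacent rows */

/* Given a victim address, naively extrapolate the two aggressor rows
 * as one row-stride below and one row-stride above it (sketch only). */
static void guess_aggressors(uintptr_t victim,
                             uintptr_t *first, uintptr_t *second)
{
    *first  = victim - ROW_STRIDE;  /* first aggressor row  */
    *second = victim + ROW_STRIDE;  /* second aggressor row */
}
```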
Some key elements from this paper :
- An optimized cache eviction strategy that is agnostic to the CPU, removing the need for the CLFLUSH instruction
- A native code implementation without any special instruction
- Some countermeasures are discussed
The main approach implemented by  is to use standard memory accesses to produce cache eviction (Google's team mentioned this might be possible(2)), achieved by applying standard cache attack techniques, i.e. measuring timing to distinguish a cache hit from a cache miss.
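The idea of eviction through plain memory accesses can be sketched minimally: repeatedly touching a set of addresses congruent to the target (same cache set) makes the replacement policy discard the target without CLFLUSH. The set size, the loop shape, and the way the eviction set is supplied are my assumptions for illustration; the paper's actual eviction strategies use more refined access patterns.

```c
#include <stddef.h>
#include <stdint.h>

/* Evict a target cache line using only ordinary reads: access enough
 * addresses that map to the target's cache set that the replacement
 * policy discards the target.  Finding the eviction set is a separate
 * step; here it is simply passed in (illustrative sketch).
 * Returns the number of loads performed. */
static size_t evict(volatile uint8_t **evset, size_t n, int rounds)
{
    size_t loads = 0;
    for (int r = 0; r < rounds; r++)      /* repeat to cope with adaptive */
        for (size_t i = 0; i < n; i++) {  /* replacement policies         */
            (void)*evset[i];              /* plain load, no CLFLUSH       */
            loads++;
        }
    return loads;
}
```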
One of the main objectives is to optimize the cache eviction process, which requires determining an access pattern. The idea is to first obtain near-100% cache eviction and then reduce the number of addresses without reducing the eviction rate; on a Haswell CPU this eviction strategy produced an eviction rate above 99.97%. To determine how effective their new eviction strategy was, the authors compared their adaptive eviction strategy on a Haswell CPU against an LRU eviction policy and against the CLFLUSH instruction. The net result demonstrated that with CLFLUSH, as one would expect, all memory accesses reach DRAM (i.e. nothing is cached), while the LRU eviction policy produced 648 times more cache hits than the adaptive eviction policy. Central to this new eviction strategy is a timing attack: the authors use timing to determine whether an address is cached. A cache hit has a faster access time than a cache miss, so comparing timings helps determine whether a memory address was served from DRAM or from the cache.
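The hit/miss timing test can be sketched with the x86 time-stamp counter. This is a generic cache-timing measurement under my own assumptions (fence placement, function name), not the paper's exact code; the hit/miss threshold is machine-specific and must be calibrated.

```c
#include <stdint.h>
#include <x86intrin.h>  /* __rdtsc, _mm_lfence (x86 only) */

/* Time a single memory access in TSC cycles.  A low count suggests a
 * cache hit, a high count a cache miss (illustrative sketch). */
static uint64_t time_access(volatile uint8_t *p)
{
    _mm_lfence();                 /* serialize before reading the TSC  */
    uint64_t start = __rdtsc();
    _mm_lfence();
    (void)*p;                     /* the access being timed            */
    _mm_lfence();                 /* wait for the load to complete     */
    return __rdtsc() - start;
}
```

A simple calibration is to compare the time for an address that was just accessed (a hit) against the same address just after flushing it (a miss).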
One of the countermeasures the authors mention was identified by : increasing the refresh rate. It was highlighted that a number of manufacturers have doubled the refresh rate to reduce the probability of attack; as such this is a mitigation strategy rather than a preventative one. According to , preventing such disturbance errors entirely would require increasing the refresh rate eightfold, at which point DRAM would spend close to 35% of its time refreshing. The authors also found several guides on how to decrease refresh rates in an effort to increase performance; doing so would increase the probability of a successful attack on such customized systems.
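The ~35% figure can be sanity-checked with rough, assumed DDR3 parameters: 8192 refresh commands per refresh window and a refresh command time (tRFC) of about 350 ns. Both values are my own assumptions for illustration, not figures from the notes above.

```c
/* Rough refresh overhead: the fraction of time DRAM is busy refreshing,
 * given refresh_cmds refresh commands per window of window_s seconds,
 * each command occupying the device for trfc_s seconds. */
static double refresh_overhead(double refresh_cmds, double trfc_s,
                               double window_s)
{
    return refresh_cmds * trfc_s / window_s;
}
```

With these assumed parameters, a 64ms window gives an overhead of about 4.5%, and shortening the window eightfold to 8ms gives about 35.8%, in line with the roughly 35% quoted above.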
 Yoongu Kim, Ross Daly, Jeremie Kim, Chris Fallin, Ji Hye Lee, Donghyuk Lee, Chris Wilkerson, Konrad Lai, Onur Mutlu (Carnegie Mellon & Intel Labs), "Flipping Bits in Memory Without Accessing Them: An Experimental Study of DRAM Disturbance Errors"
 Seaborn & Dullien http://googleprojectzero.blogspot.ie/2015/03/exploiting-dram-rowhammer-bug-to-gain.html
 Daniel Gruss, Clementine Maurice, Stefan Mangard, "Rowhammer.js: A Remote Software-Induced Fault Attack in JavaScript"
 JEDEC Standards DDR3 SDRAM