Dan Hammerstrom

Selected Publications

"Performance/price estimates for cortex-scale hardware: A design space exploration," M.S. Zaveri, D. Hammerstrom, Neural Networks, (Archival Journal of the International Neural Network Society), Elsevier, 2010, DOI:10.1016/j.neunet.2010.12.003.

“Prospects for Building Cortex-Scale CMOL/CMOS Circuits: A Design Space Exploration,” M. S. Zaveri and D. Hammerstrom, IEEE Norchip Conference, Trondheim, Norway, November 16-17, 2009. PDF

“Nano/CMOS implementations of Inference in Bayesian Memory – An Architecture Assessment Methodology,” Mazad S. Zaveri and Dan Hammerstrom, IEEE Transactions on Nanotechnology, Vol. 9, No. 2, March 2010, pp. 194-211. PDF

“Representation, Methods, and Circuits for Time Based Conversion and Computation,” K. Mhaidat, M. Jabri, and D. Hammerstrom, International Journal of Circuit Theory and Applications, Wiley InterScience.

“Compact Low-Power Time-Based Conversion with Noise Immunity Similar to Digital Conversion,” Khaldoon M. Mhaidat, Marwan A. Jabri, and Daniel W. Hammerstrom, International Symposium on Signals, Circuits and Systems – ISSCS 2009, July, 2009, Iasi Romania.

“CMOS/CMOL Architectures for Spiking Cortical Column,” C. Gao, M. S. Zaveri, and D. Hammerstrom, Proceedings of the IEEE World Congress on Computational Intelligence (WCCI) - International Joint Conference on Neural Networks (IJCNN), June 1-6, 2008, pp. 2442-2449. PDF

“Defect-Tolerant CMOL Cell Assignment via Satisfiability,” William N. N. Hung, Changjian Gao, Xiaoyu Song, and Dan Hammerstrom, IEEE Sensors Journal, Vol. 8, No. 6, June 2008, pp. 823-830.

“A Defect-Tolerant CAD Framework for the CMOL Architecture via Satisfiability,” William N. N. Hung, Changjian Gao, Xiaoyu Song, and Dan Hammerstrom, Nanoelectronic Devices for Defense & Security Conference, Crystal City, VA, June 18-21, 2007.

“Cortical Models onto CMOL and CMOS – Architectures and Performance/price,” Changjian Gao and Dan Hammerstrom, IEEE Transactions on Circuits and Systems I: Regular Papers, Vol. 54, No. 11, pp. 2502-2515, November 2007. PDF

“Architectures for Silicon Nanoelectronics and Beyond,” Iris Bahar, Justin Harlow, Dan Hammerstrom, William Joyner, Clifford Lau, Diana Marculescu, Alex Orailoglu, and Massoud Pedram, IEEE Computer, January 2007. PDF

“Vision-Based Hazard Detection,” Chiu-Hung Luk, Mazad S. Zaveri, Dan Hammerstrom, Richard J. Kerr, ANNIE (Artificial Neural Networks In Engineering) 2006, November 2006, St. Louis, MO. PDF

“Biologically Inspired Architectures,” D. Hammerstrom, book chapter in Information Technology, Ed. Rainer Waser, Wiley Nanotechnology Series, 2008.

“Biologically Inspired Enhanced Vision System (EVS) for Aircraft Landing Guidance,” Chiu Hung Luk, Changjian Gao, Dan Hammerstrom, Misha Pavel, and Dick Kerr, International Joint Conference on Neural Networks, Budapest, Hungary, July 2004.

“Advanced integrated enhanced vision systems,” J. Richard Kerr, Chiu Hung Luk, Dan Hammerstrom, and Misha Pavel, SPIE AeroSense, Enhanced and Synthetic Vision conference (No. 5081), Orlando, Florida, April 21-25, 2003.

“FPGA Implementation of Very Large Associative Memories - Scaling Issues,” Changjian Gao, Dan Hammerstrom, Shaojuan Zhu, and Mike Butts, chapter submitted for the book FPGA Implementations of Neural Networks, Ed. Amos Omondi, Kluwer Academic Publishers, Boston, 2003. PDF

“Platform Performance Comparison of PALM Network on Pentium 4 and FPGA,” Changjian Gao and Dan Hammerstrom, IJCNN 03, July 2003. PDF

“Reinforcement Learning in Associative Memory,” Shaojuan Zhu and Dan Hammerstrom, IJCNN 03, July 2003. PDF

“Simulation of Associative Neural Networks,” Shaojuan Zhu and Dan Hammerstrom, Proceedings of the International Conference on Neural Information Processing, November 2002, Singapore.

“Digital VLSI for Neural Networks,” Dan Hammerstrom, The Handbook of Brain Theory and Neural Networks, Second Edition, Ed. Michael Arbib, MIT Press, 2003.

“Comparing SFMD and SPMD Computation for On-Chip Multiprocessing of Intermediate Level Image Understanding Algorithms,” Steve Rehfuss and Dan Hammerstrom, Proceedings of the Conference on Computer Architectures for Machine Perception, Boston, MA, October 1997. PDF

“Image Processing Using One-Dimensional Processor Arrays,” Dan Hammerstrom and Dan Lulich, The Proceedings of the IEEE, Vol. 84, No. 7, July 1996, pp. 1005-1018. PDF

“A Digital VLSI Architecture for Neural Network Emulation, Pattern Recognition, and Image Processing,” Dan Hammerstrom, Naval Research News, Office of Naval Research, Three/1995, Vol. XLVII, pp. 27-43.

“Model Matching and Single Function Multiple Data Computation (SFMD),” Steve Rehfuss and Dan Hammerstrom, NIPS Proceedings, November 1995. PDF

“A Digital VLSI Architecture for Real World Applications,” Dan Hammerstrom, in An Introduction to Neural and Electronic Networks, Edited by Steven F. Zornetzer, Joel L. Davis, Clifford Lau, and Tom McKenna, Academic Press, 1995. PDF

“Working with Neural Networks”, Dan Hammerstrom, IEEE Spectrum, July 1993, pp. 46-53.

“Neural Networks At Work”, Dan Hammerstrom, IEEE Spectrum, June 1993, pp. 26-32.

“The CNAPS Architecture for Neural Network Emulation,” Dan Hammerstrom, Wendell Henry, and Mike Kuhn, Parallel Digital Implementations of Neural Networks, Edited by K.W. Przytula and V.K. Prasanna Kumar, Prentice Hall, Englewood Cliffs, NJ, 1993, pp. 107-138.

“A VLSI Architecture for High-Performance, Low-Cost, On-chip Learning,” D. Hammerstrom, in Artificial Neural Networks, Eds. E. Sanchez-Sinencio and C. Lau, IEEE Press, 1992.

“An 11 Million Transistor Digital Neural Network Execution Engine,” M. Griffin, G. Tahara, K. Knorpp, R. Pinkham, B. Riley, Dan Hammerstrom, and Eric Means, IEEE International Solid-State Circuits Conference, 1991, pp. 180-181.

“A VLSI Architecture for High-Performance, Low-cost, On-chip Learning,” Dan Hammerstrom, Proceedings of the International Joint Conference on Neural Networks, pp. II-537 to II-543, San Diego, June 1990.

“Why VLSI Implementations of Associative VLCNs Require Connection Multiplexing,” Jim Bailey and Dan Hammerstrom, Proceedings of the 1988 International Conference on Neural Networks, pp. 173-180, San Diego. PDF

“The Connectivity Requirements of Simple Association, or How Many Connections Do You Need?,” D. Hammerstrom, 1987 IEEE Conference on Neural Network Information Processing, pp. 338-347. PDF