A FIFO is a "first in, first out" memory buffer between two systems with simultaneous read and write access: the data written into the buffer first comes out of it first. If you've ever waited in a line, then you understand how a FIFO functions. FIFOs can be implemented in software or hardware; the choice between them depends on the application's requirements and the features desired. A FIFO can be either synchronous or asynchronous. The difference is that a synchronous FIFO's read and write operations are governed by a single clock, whereas an asynchronous FIFO's read and write operations are timed by two independent clocks.
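
The first-in, first-out behavior is easy to sketch in software. The following minimal Python example (an illustration only, not a model of a hardware FIFO; the class name and fixed-depth design are our own invention) shows that the oldest value written is always the first one read:

```python
from collections import deque

class Fifo:
    """Minimal software FIFO with a fixed capacity."""
    def __init__(self, depth):
        self.buf = deque()
        self.depth = depth

    def write(self, value):
        # Refuse writes when full, like a hardware FIFO's full flag.
        if len(self.buf) == self.depth:
            raise OverflowError("FIFO full")
        self.buf.append(value)

    def read(self):
        # Refuse reads when empty, like a hardware FIFO's empty flag.
        if not self.buf:
            raise IndexError("FIFO empty")
        return self.buf.popleft()  # oldest value comes out first

f = Fifo(depth=4)
for v in "abc":
    f.write(v)
print(f.read())  # -> a: the first value written is the first value read
```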

Synchronous FIFOs are the ideal choice for high-performance systems due to their high operating speed. They also offer many other advantages that improve system performance and reduce complexity, including synchronous status flags such as half-full and programmable almost-empty and almost-full flags, as well as features such as width expansion and retransmit. Synchronous FIFOs are easier to use at high speeds because they use free-running clocks to time internal operations, whereas asynchronous FIFOs require read and write pulses to be generated without an external clock reference.

An asynchronous FIFO is a FIFO where data is written from one clock domain and read from a different clock domain, with the two clocks asynchronous to each other. It is equipped with control logic that manages the read and write pointers, generates status flags, and provides optional handshake signals for interfacing with the user logic. The individual read and write ports are fully synchronous, but the FIFO does not require the read and write clocks to be synchronized to each other.

Selecting which FIFO memory works best depends on the performance specifications you require, such as access time, data rate, data setup time, and data hold time. Access time measures the speed of the memory: it begins when the Central Processing Unit (CPU) sends a request to the memory and ends when the CPU receives the data. The data rate, or transfer speed, is the number of bits per second that can be moved internally. The data setup time is the minimum time that logic levels must be held stable on the input lines before the triggering edge of the clock pulse in order to be reliably clocked. The data hold time is the interval that logic levels must remain on the inputs after the triggering edge of the clock pulse in order to be reliably clocked into the chip.
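
These specifications interact through simple arithmetic. The following sketch uses made-up numbers (a 100 MHz clock, an 18-bit bus, and 2.0 ns / 0.5 ns setup and hold times are illustrative values, not from any datasheet) to show how clock rate and bus width set the peak data rate, and how setup and hold times carve up each clock period:

```python
# Illustrative numbers only, not taken from any real part's datasheet.
clock_hz = 100e6          # 100 MHz read/write clock
bus_width_bits = 18       # width of the data port

# Peak data rate: one word moves per clock cycle.
peak_rate_bps = clock_hz * bus_width_bits
print(f"peak data rate: {peak_rate_bps / 1e9:.1f} Gbit/s")  # 1.8 Gbit/s

# Setup/hold: data must be stable t_setup before the clock edge and
# stay stable t_hold after it, leaving the rest of the period free.
period_ns = 1e9 / clock_hz            # 10 ns per cycle
t_setup_ns, t_hold_ns = 2.0, 0.5
change_window_ns = period_ns - t_setup_ns - t_hold_ns
print(f"data may change during {change_window_ns:.1f} ns of each cycle")
```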

Read more »

Power management integrated circuits (power management ICs or PMICs) are integrated circuits that perform various functions related to power management, including DC-to-DC conversion, battery charging, power-source selection, voltage scaling, and power sequencing. PMICs control the flow and direction of electrical power between a device's external power sources and its internal voltage rails. They usually incorporate multiple functions into one IC to increase efficiency, decrease size, and improve heat dissipation.

Powervation Ltd. is a semiconductor company that develops digital power integrated circuit system-on-chip solutions, used by designers of cloud computing, communications, and high-performance power systems. Its digital power management solutions use a multiprocessor SoC architecture comprising a proprietary dual-core (DSP and RISC) processor, RAM and Sidense one-time-programmable (OTP) memory, power conversion blocks, and serial interfaces.

The Sidense 1T-OTP stores firmware and DSP code, security codes, and design- and user-specific configuration parameters for the voltage regulator. OTP data is loaded into RAM at power-on, allowing quick access by the processing unit. Other memory options include non-volatile memory such as flash, separate off-chip memory, and read-only memory (ROM).

The benefit of using OTP memory over these other types is that it is smaller and requires no additional wafer processing steps. With ROM, the processor's firmware is fixed, and any change in content costs time and money; flash memory is power-intensive and susceptible to corruption. OTP memory not only decreases the time and cost of derivative products but also accommodates software modifications, increasing the processor's flexibility. OTP is also durable enough to handle the demands of power management and similar applications.

Read more »

Whether you’re using a USB flash drive or a secure digital card, you’re using flash memory. Flash memory, commonly NAND flash, has become a major part of every industry as we incorporate technology into anything and everything. One of NAND's best attributes is that it does not need to be powered on to retain data; it holds its contents until they are deliberately erased. This is an important feature because it makes NAND more cost-efficient than DRAM, which must be powered on to hold data.

There are two common types of NAND flash memory: single-level cell and multi-level cell. The single-level cell, also known as SLC, is the better-performing, yet pricier, of the two. SLC holds one bit per cell, while the multi-level cell, also known as MLC, holds two bits per cell.
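
The bits-per-cell difference is also why MLC is cheaper per gigabyte. A quick back-of-the-envelope calculation (the cell count here is an illustrative round number, not a real die's) shows that the same silicon holds twice the data when each cell stores two bits:

```python
# Same number of physical cells, different bits stored per cell.
cells = 1_000_000_000           # illustrative: one billion flash cells
slc_bits, mlc_bits = 1, 2       # SLC stores 1 bit/cell, MLC stores 2

slc_gib = cells * slc_bits / 8 / 2**30   # capacity in GiB
mlc_gib = cells * mlc_bits / 8 / 2**30
print(f"SLC: {slc_gib:.2f} GiB, MLC: {mlc_gib:.2f} GiB")
# MLC doubles the capacity of the same die, trading away some of the
# speed and endurance that make SLC the pricier, better-performing option.
```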

NAND cells do not last forever, and they were not intended to. Each cell can endure only a limited number of program/erase cycles before it burns out. To make sure your data is stored safely, it is important that the wear is leveled out across all blocks, so the device does not wear unevenly and fail prematurely.
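
The idea behind wear leveling can be sketched in a few lines. This hypothetical policy (real flash controllers use more elaborate schemes; the block count and write loop here are invented for illustration) simply directs each new write to the least-worn block:

```python
# Hypothetical wear-leveling sketch: always write to the least-worn block.
NUM_BLOCKS = 4
erase_counts = [0] * NUM_BLOCKS   # program/erase cycles seen by each block

def pick_block():
    # Choose the block with the fewest erase cycles so wear stays even.
    return min(range(NUM_BLOCKS), key=lambda b: erase_counts[b])

for _ in range(12):               # simulate 12 block writes
    b = pick_block()
    erase_counts[b] += 1          # erasing before rewriting wears the block

print(erase_counts)  # -> [3, 3, 3, 3]: wear is spread evenly across blocks
```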

Solid state drives (SSDs) are storage devices built from NAND memory. They play an important role in industries where heavy loads of non-volatile storage are necessary. SSDs are a replacement for hard disk drives. Because they have no moving parts, they don’t suffer from mechanical latencies and can withstand more shock and vibration than a hard disk drive, making them great for portable use and mobile applications. A typical SSD is rated to write around 20 GB per day, and its access times are a small fraction of a hard drive's, which makes it a far more efficient option.

Read more »

If you’ve ever looked into computer memory, you might have come across some strange acronyms: RAM, DRAM, SDRAM, DDR, DDR2, DDR3, and so on. You know they all refer to memory, but you don’t know the difference, and now you’re just confused. Here’s a short rundown of what you need to know about computer memory.

DDR3 stands for the third generation of double data rate memory, which is used to store program code and data. Of the three DDR generations discussed here, it’s the one you’re better off getting: DDR3 is currently the most common memory you can get for your computer’s RAM, or random access memory. To be more specific, DDR3 is the current standard for SDRAM, synchronous dynamic random-access memory.

SDRAM was developed in the 1990s to address the inadequacies of DRAM. DRAM had an asynchronous interface, meaning it operated independently of the processor, which made it slow. SDRAM streamlined the process by synchronizing memory operations to the system clock, so the memory could queue one process while waiting for another and therefore execute tasks much more quickly. Eventually, SDRAM, which operated via a single data rate interface, became too slow and was replaced by DDR, or double data rate, memory. DDR could transfer data on both the rising and falling edges of the clock signal, operating at nearly twice the speed of SDRAM. This led to the realization that memory could run at a lower clock rate, use less energy, and still achieve faster speeds.
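
The double-data-rate advantage is plain arithmetic: two transfers per clock cycle instead of one. The numbers below are illustrative (a 100 MHz bus clock is an example value, not a specific part's rating; 64 bits is the usual module bus width):

```python
# Illustrative arithmetic: DDR transfers on both clock edges, SDR on one.
bus_clock_mhz = 100    # example I/O bus clock, not a specific part's rating
bus_width_bits = 64    # typical memory-module data bus width

sdr_mtps = bus_clock_mhz          # single data rate: 1 transfer per cycle
ddr_mtps = bus_clock_mhz * 2      # double data rate: rising + falling edge

ddr_bandwidth_gbs = ddr_mtps * 1e6 * bus_width_bits / 8 / 1e9
print(f"SDR: {sdr_mtps} MT/s, DDR: {ddr_mtps} MT/s")
print(f"DDR peak bandwidth: {ddr_bandwidth_gbs:.1f} GB/s")  # 1.6 GB/s
```

The same clock, the same bus, but twice the transfers, which is exactly why DDR could then drop the clock rate, save energy, and still come out ahead.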

Eventually, as processors became more powerful, DDR also became insufficient, and by 2003 DDR2 was introduced. Continued advancements in technology drove demand for faster, more powerful processors and memory, so the cycle continued with DDR3 in 2007. In 2014, DDR4 was introduced, and 2019 is expected to see DDR5. Currently, DDR3 is the base standard, but with DDR5 on its way, that could change. Hopefully, you now know a bit more about memory, and the acronyms make more sense.

Read more »

In 2006, several DRAM manufacturers announced that they would be supplying the industry’s first DDR3 devices and modules to leading PC developers by early 2007 and that DDR3 would be broadly available by the end of 2007 or early 2008. While DDR3 memory might not seem like much to us today in 2018, after the release of DDR4 in 2014, in 2006 the DDR3 SDRAMs were exciting.

DDR3 SDRAM stands for double data rate type three synchronous dynamic random-access memory. The successor to DDR and DDR2 and the predecessor to DDR4, DDR3 is a type of high-bandwidth data storage. With speeds twice that of the fastest DDR2, higher bandwidth, and increased performance at lower power, DDR3 was a revelation in 2006. DDR3 SDRAM devices could transfer data at up to 1600 Mbps, double the DDR2's 800 Mbps, and more efficiently, with a supply voltage of only 1.5 V, down from DDR2's 1.8 V. The lower supply voltage in turn reduces the power consumed and the heat generated that could otherwise damage the component. Overall, compared to its predecessors, DDR3 was more efficient, something that DDR4 would take even further seven years later.
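
The voltage drop alone buys a substantial power saving, because dynamic CMOS switching power scales roughly with the square of the supply voltage. This back-of-the-envelope calculation uses only the 1.8 V and 1.5 V figures quoted above (the quadratic scaling is a standard first-order model, not a measured result):

```python
# First-order model: dynamic CMOS power P = C * V^2 * f, so at equal
# clock and capacitance, power scales with the square of the voltage.
v_ddr2, v_ddr3 = 1.8, 1.5   # supply voltages quoted for DDR2 and DDR3

relative_power = (v_ddr3 / v_ddr2) ** 2
print(f"DDR3 switching power is ~{relative_power:.0%} of DDR2's "
      f"at the same clock rate")  # ~69%
```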

In addition to the improvements over DDR2 SDRAM, DDR3 also introduced an asynchronous reset pin, self-calibration, write leveling, and a CAS write latency per speed bin. Then, in 2013, DRAM manufacturers announced that 2014 would see the successor to DDR3, the DDR4. Even more than DDR3, DDR4 wowed the electronics industry with better performance and new capabilities, effectively rendering even DDR3 obsolete.

Read more »

Flash memory is used as a storage device, just like a hard drive or RAM. What sets it apart is its size and portability. Transferring data from one computer to another with a flash drive is a breeze: plug the drive into a USB port on a laptop or desktop, and you can easily move files from the computer to the flash drive. This makes it possible to transport information on a storage device that’s small, convenient, and compatible with virtually all computers and laptops.

Flash drives contain EEPROM (Electrically Erasable Programmable Read-Only Memory) chips. Inside, a grid of columns and rows of transistors, with a cell at each intersection, is what makes flash memory possible. Tunneling and erasing are vital functions of flash memory: tunneling is used to move the electrons that are essential for flash memory to function, and it is what makes erasing files on flash memory easy.

A great alternative to flash memory drives is flash memory cards. Flash memory cards are smaller, make no noise, have no moving parts, and allow quicker access than a drive. Given these benefits, one might ask: why not use memory cards instead of flash drives, since they are faster, smaller, and less prone to malfunction? It’s because flash drives are significantly cheaper. You get much more memory for the price with a flash drive, which, despite its drawbacks compared to memory cards, is still extremely reliable and can be trusted with the files you want to save or share.

The benefits of being able to carry files anywhere to share or transfer them are clear. The ease of use and speed of flash memory are undeniable.

Read more »

Flash memory is used broadly every day in all types of memory products, including mobile phones, cameras, computers, and more. Flash is built to last for an extended time, and given its diverse applications there is never-ending demand for it. However, there has been an increase in unauthorized resellers who recover used flash chips and resell them as new. This can negatively impact the buyer, both because of the unethical sales practice and because of the compromised quality. To help buyers avoid falsely advertised flash, engineers at the University of Alabama began researching a way to identify whether a flash memory chip has been used.

A flash memory cell is composed of three terminals: the gate, drain, and source. A voltage applied to the control gate drives electrons down through the bottom oxide and traps them inside the floating gate. The voltage needed to switch the cell is readily measurable and varies with the stored charge.

The cells are erased by driving the charge out of the floating gate. The engineers found that this repeated stress degrades the flash: current leaks through when the cell needs to be turned off, and the rate at which charge moves through the device drops, which slows down the flash and takes a toll on the memory’s erase time.
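
The detection idea can be illustrated with a hypothetical sketch: if erase time grows with wear, comparing a chip's measured erase time against a pristine baseline hints at prior use. The numbers, the threshold, and the `looks_recycled` helper below are all invented for illustration; the published technique is more sophisticated than this.

```python
# Hypothetical illustration: erase time grows as a flash block wears,
# so an unusually slow erase hints that a "new" chip was recycled.
BASELINE_ERASE_US = 500.0   # assumed erase time of a pristine block (made up)
THRESHOLD = 1.03            # flag anything more than 3% slower than baseline

def looks_recycled(measured_erase_us):
    """Return True if the measured erase time suggests prior wear."""
    return measured_erase_us / BASELINE_ERASE_US > THRESHOLD

print(looks_recycled(505.0))  # -> False: within 3% of a fresh chip
print(looks_recycled(530.0))  # -> True: ~6% slower, likely used
```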

According to research reported at an IEEE International Symposium in Washington, D.C., the engineers’ technique was able to identify recycled flash memory with as little as three percent prior usage. Overall, the research made it possible to determine which flash memory had a history of use and what the long-term effects of that use are.

Read more »

Over the previous couple of years, the computing industry experienced some of the most dangerous attacks ever mounted against modern processors. Many people and businesses were affected by this unwanted intrusion into their personal devices. Top computing platforms built on AMD, ARM, and Intel processors are affected, and mobile devices such as tablets, laptops, smartphones, and other important devices can be drained of their private information.

UK news source Independent.co.uk emphasizes that

“This is one of the biggest cyber security vulnerabilities we’ve ever seen in terms of the potential impact to personal, business and infrastructure computer systems.”

The article also notes that, because the attack strikes with extreme precision at the core of the product, it is often close to impossible to determine whether a device has actually been affected. Since almost all devices and CPUs use these standard processor designs, it is very likely that devices globally can and will be affected. A technique known as “speculative execution” is the root of the unwanted hacking and intrusion: it was created to enhance a chip’s performance, but it took a turn for the worse.

Temporary fixes have been released by some companies, but they are not certain to be the permanent solution. Another toll the patches may take is a slowdown of the entire computer processing system. Given the volume of damage these problems may cause, legal action has also begun. The processor companies practice what is known as “responsible disclosure,” which allows them to release their issues to the public only once they have a solution, to avoid jeopardizing their name and affecting their stock price. Intel put this into practice regarding the vulnerability in its systems but was exposed by “The Register” and forced to reveal the issue.

Keeping the security software on your device up to date can minimize the chances of your device being affected.

Read more »

On February 14, 2017, George Kerr posted an article about how Cisco has big plans to solve a major infrastructure problem. The solution Cisco has provided deals mainly with electronic health records, or EHRs.

Digital health records have made the healthcare industry more seamless, reducing the number of errors and improving coordination of care. They also get rid of tangible records stored away in manila folders, on unsecured paper, and in rooms filled with filing cabinets.

With the help of flash storage, paper records are a thing of the past; however, the infrastructure behind digital records has had to be “ripped and replaced” every time it is upgraded, typically on a routine basis of every 3 to 4 years. While this is far better than keeping tangible copies of patient information, upgrading the infrastructure this way is very costly and time-consuming.

Cisco has announced a partnership with Pure Storage to create a flash storage platform that is instantly upgradable. Adding new capabilities and capacity takes only minutes, and the platform is also very user-friendly. One early adopter of the product was able to achieve over 230% ROI.

Read more »

The carbon nanotube memory’s ship has finally come in. Nantero’s non-volatile random access memory, or NRAM, is well past its original sell-by date; arriving almost a decade later than other companies’ launch dates, the technology did not look competitive.

Because NRAM uses groups of nanotubes that are deposited randomly within a space on a substrate, as opposed to being individually placed in very specific locations, Nantero is able to mass-produce these products easily.

Nantero wasn’t in the best shape in the past; however, things changed after a deal was made between Fujitsu and the company. Chris Spivey, senior editor at BCC, said that

“Fujitsu announced that they see NRAM as having a natural place alongside their ferroelectric random access memory (FRAM).”

Here’s what Johnson wrote about what Takashi Eshita, the senior director of system recovery at Fujitsu, told Spivey:

“…Although FRAM has excellent advantages, such as high speed, low power consumption, high endurance, and so on, shrinking its size is slightly difficult. NRAM has the potential to be ramped up to large memory capacities, so with that in mind, Fujitsu decided to boost its non-volatile memory lineup by starting to develop NRAM for large-scale memories.”

Read more »
