Friday, September 25, 2015

The NovoCyte Analyzer Review - Acea Biosciences

Why enter the crowded flow cytometer market with a 3-laser, 13-color analyzer? Why not? Acea Biosciences (San Diego, CA) is yet another upstart cytometry company with aspirations of creating an easy-to-use, affordable workhorse analyzer targeted at the meat of all flow cytometry experiments hovering in the 6-8 parameter range. The NovoCyte is a 3-laser, 13 fluorescent parameter analyzer ready to do battle with the likes of the industry-giant FACSCanto-II (Becton-Dickinson), newcomers like the CytoFlex (Beckman-Coulter) and the Attune NxT (Thermo-Fisher), and perennial contenders such as the Miltenyi MACSQuant and instruments from Stratedigm and Partec/Sysmex. Of course, it's configurable starting at just 1 laser and 3 colors (1/3), with 2/4, 2/6, and 3/13 options (405nm, 488nm and 640nm). It’s pretty clear 3 lasers is the new 2, 4 lasers is the new 3, and 5+ lasers is the new 4. I had a chance to spend a couple of months with the NovoCyte and can say that the goal post for a quality cytometer has once again been moved back. Sure, it’s not in the same class, spec-wise, as the BD LSR-Fortessa, but then again, how many applications truly require a 5-laser, 20-parameter, $350K instrument? Like most things in life, it’s all about the trade-off. Yes, there are some sacrifices you’ll need to make when going with the NovoCyte, but in many cases, it’s a small price to pay because, quite literally, it’s a small price to pay.

No doubt, many readers are likely in the same position as I am. I have BD instruments running FACSDiVa and a bunch of users who know DiVa inside and out. It’s also probably true that you have your favorite set of expletives you rattle off every time you use a BD instrument running FACSDiVa. The tyranny of the default is extremely powerful, and the only way to break through the FACSDiVa wall is to create a cytometer that is a pleasure to use. I always like to say there are only two parts of a cytometer with which an end-user interacts - the software and the sample loading apparatus. If you can nail these two parts of the instrument, you’ll be able to win over a large part of the community. Conversely, if these components fail from a usability standpoint, you can have the best performing instrument out there and gain no traction at all in the marketplace. The NovoCyte hits both of these key features pretty well. Its implementation of the autosampler is probably the best I’ve seen to date. It’s smooth, extremely flexible in the types of tubes/plates it accepts, fast enough, and frankly, it just looks cool. In fact, one of the more impressive things about the NovoCyte in general is its build quality. It’s encapsulated in a sturdy metal skin that is completely white with a few touches of color here and there. The doors and cover are secure yet easy and smooth to open. Overall, it has a pretty small footprint, even when you add in the fluidics tray and the autosampler.

The software takes a bit of getting used to, but once you understand the workflow, it’s a breeze. You can easily pick up on the design cues implemented in their software - the structure of FACSDiVa, the drag-n-drop of FlowJo, and the ribbon layout of MS Office. It’s friendly enough that you won’t shy away immediately, but it will certainly take you a few minutes to get your bearings. The NovoCyte also excels at some other usability features that even surprised a cytometry veteran like me. I especially enjoyed the physical startup/shutdown button on the face of the instrument. This single button not only turns the cytometer on, but also performs the fluidics startup routine before you even turn the computer on. Likewise, when you’re rushing out the door in the evening and you need to shut down the instrument, just click the button once again and it will automagically perform the shutdown routine and power off the instrument. I can’t tell you how many times I’ve rushed through a shut-down procedure, skipping as many steps as I could in order to get out the door on time. It’s clear a lot of effort and forethought went into many of these usability features. Is this enough to break through the DiVaDefaultTM? I think it might have a chance, at least.


The NovoCyte’s fluidics are driven by a peristaltic pump on the sheath side and a syringe pump on the sample side. They obviously got the memo about the need for a pulseless peristaltic pump since I wasn't able to detect any fluctuations in sample flow during long runs (observing bead intensity vs. time). Also, the syringe allows for volumetric sample delivery and beadless absolute counts. The fluids are stored in a small tray that sits next to the instrument and holds sheath, waste, and a couple bottles of cleaning solutions used during shutdown. The amount of sheath used is relatively small, and although the tank only holds a liter or two, you can easily eke out a full day's worth of sample acquisition using a reasonable amount of between-sample washing. Like any syringe-based system, the fluidics seem a bit slow; not in terms of volume per unit time, but simply in the operations that need to take place before and after a sample is acquired. The syringe needs to fill, then push its contents through the flow cell, and if you stop prematurely and have extra sample in the syringe, it needs to expel any extra fluid. Syringe pumps also limit how much fluid you can sample at a time. Whereas positive pressure systems and peristaltic pump systems can acquire freely as long as there's fluid in the tube (and beyond!), syringe-based systems are limited in how much volume they can pull. The NovoCyte pulls between 10 and 100uL of volume at a time, likely OK for many applications, but for those who require a larger sample volume, concatenation is your friend.
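Since concatenation came up, here is a minimal sketch of the idea: each 10-100uL pull yields its own FCS file, and once the events are parsed out (with whatever FCS reader you prefer; that step is omitted here), stitching the runs back together is just list concatenation with a sanity check on the parameter count.

```python
# Sketch: merging several sub-volume acquisitions into one event list.
# Assumes the events have already been parsed out of each FCS file into
# per-run lists of event tuples; the FCS parsing step itself is omitted.

def concatenate_runs(runs):
    """Stack per-run event lists into a single dataset."""
    if not runs:
        raise ValueError("no runs to concatenate")
    n_params = len(runs[0][0])
    merged = []
    for run in runs:
        if any(len(event) != n_params for event in run):
            raise ValueError("runs must share the same parameters")
        merged.extend(run)
    return merged

# Three 100 uL pulls of the same sample, 4 parameters per event
runs = [[(1.0, 2.0, 3.0, 4.0)] * 5000 for _ in range(3)]
merged = concatenate_runs(runs)
print(len(merged))  # 15000
```

In practice you'd also want to carry the acquisition volume keyword along so absolute counts still work after merging.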

I would argue the heart of the NovoCyte system is its autosampler, and although it’s not a standard feature of the instrument, it’s certainly a must-have add-on in my book. It comes with a 24-tube rack that holds 12x75mm “FACS” tubes as well as a microtiter plate holder. The plate holder can accommodate standard 96-well plates (V-, flat-, and U-bottom) as well as 96 deep-well plates and, my personal favorite, a 96-tube rack (1.2mL “bullet” tubes racked in the standard 96-well format). Once you’ve mapped out which wells/tubes you’ll be sampling from, the system affords you true, walk-away operation complete with email notifications. Sample flow rate, number of washes, and mixing are all programmable to allow you some control over optimizing against carryover or for speed.

As mentioned above, another key usability feature is the system’s one-button startup/shutdown routine. One thing that I like to test on all my instruments is what I call the “cold-start to dots on the plot” time. As the name implies, I’m interested in knowing how long it takes to start the instrument, perform any required start-up routines, run the QA procedure, put on a sample, and see dots appear on a plot. For some instruments this can be quite long, sometimes as much as 20-30 minutes. The NovoCyte is extremely fast in this regard. By having a physical button that powers on the instrument and performs the fluidics startup, you can do that concurrently with things like logging into the computer, launching the software, and preparing for QC. The QC process is bare bones, but actually mimics our standard procedure used on the rest of our instruments - a single-peak bead’s MFI and CV are tracked over time and plotted on a Levey-Jennings (L-J) plot.
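For what it's worth, that L-J tracking reduces to a very small calculation. Here's a hedged sketch with invented MFI values: establish a baseline mean and SD for the bead peak, then flag any run that lands outside mean ± 2 SD.

```python
import statistics

# Sketch of single-bead QC tracking: flag any daily bead-peak MFI that
# drifts outside mean +/- 2 SD of the baseline, as on a Levey-Jennings
# plot. All values here are invented for illustration.

def lj_limits(baseline):
    """Lower and upper control limits from a baseline series."""
    mean = statistics.mean(baseline)
    sd = statistics.pstdev(baseline)
    return mean - 2 * sd, mean + 2 * sd

def flag_drift(baseline, new_values):
    """Return the new readings that fall outside the control limits."""
    lo, hi = lj_limits(baseline)
    return [v for v in new_values if not (lo <= v <= hi)]

baseline_mfi = [10200, 10150, 10300, 10250, 10180]
todays = [10210, 9500]          # second run has drifted low
print(flag_drift(baseline_mfi, todays))  # [9500]
```

The same function works for the CV series; you'd just track the two side by side.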


The system has 3 Coherent lasers (405, 488, 640) and 13 fluorescence channels as well as the standard 488 light scatter parameters, FS and SS. However, to save cost and space, only 8 detectors are present. Detectors with a standard filter are re-used across multiple lasers. For example, the far-red detector with a 780LP filter detects QD800 when the event passes through the 405nm laser, PECy7 when passing through the blue laser, and APCCy7 when passing through the red laser. In theory, this is no different than having three detectors with the same filter in physically separate places on your instrument. In fact, if you opened up any cytometer, chances are you would see the same filter repeated on different detectors. By adding a delay on the detection side of things, the signals terminating at the same detector can be differentiated with just as much accuracy as three separate detectors. Of course, you can test this empirically by running PECy7 and APCCy7 single colors on the NovoCyte and on another instrument that has the same filters on separate detectors and then comparing the spillover values.
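To make the time-delay trick concrete, here's a toy sketch of the demultiplexing logic (the delay and tolerance numbers are invented for illustration, not NovoCyte specs): since the lasers are spatially separated, a cell crosses each one at a fixed time offset from the trigger, and a pulse arriving at the shared detector can be assigned to a laser by that offset.

```python
# Sketch of how one shared detector can serve three lasers. A cell
# crosses the spatially separated beams at fixed offsets from the
# trigger, so pulses at the shared 780LP detector can be binned by
# arrival time. Delay and tolerance values are invented.

LASER_DELAYS_US = {"488": 0.0, "405": 25.0, "640": 50.0}
TOLERANCE_US = 5.0

def assign_laser(pulse_time_us):
    """Map a pulse's arrival offset to the laser that excited it."""
    for laser, delay in LASER_DELAYS_US.items():
        if abs(pulse_time_us - delay) <= TOLERANCE_US:
            return laser
    return None  # stray pulse, outside every window

print(assign_laser(0.8))   # 488
print(assign_laser(24.2))  # 405
print(assign_laser(51.5))  # 640
```

In the real instrument this happens in the electronics, of course; the point is just that the windows don't overlap, so the assignment is unambiguous.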

When I set up my multicolor panels, I like to stagger the emission across different laser lines. For example, maybe I’ll use BV705, PECy7, and APC together instead of QD800, PECy7, and APCCy7. Doing this on the NovoCyte is a bit more challenging, and stacking up 8, 9, or 10 colors in a single panel could pose more spillover issues with this shared-detector setup when compared to a 1:1 setup with slightly optimized filters on each detector. It’s for this reason that I don’t think this instrument will actually be a 13-color instrument, but it will do just fine with 6 or maybe 8 colors simultaneously. Maxing out the NovoCyte causes an over-population of a few areas of the spectrum, especially the far reds. The table below shows which detectors are shared across lasers. Another point to note is that since the physical detector is shared, the "optimized" voltage is shared as well. Whereas normally you might optimize the voltage for BV650 differently than for APC, here you're given a single voltage that is "optimized" for both. Let's follow this example a bit further. When I put on unstained cells and adjust voltages, I might expect a higher voltage to be needed for longer laser/collection filter wavelengths (i.e. more voltage needed for red laser excitation and emission compared to blue or violet). However, with the NovoCyte, I'm given one predetermined voltage that should work for both. And, to be honest, it does seem to work, although the obsessive-compulsive perfectionist inside me really wants to fine-tune voltages for these two colors independently...but I'll get over it.


Similar to the BD Accuri C6, Acea Biosciences chose to go with a fixed voltage system to allow for ease-of-use. It’s well known that pretty much the only thing an end-user can screw up when acquiring data is improperly adjusting the parameter voltages. Having the voltage too high or too low can have dramatic effects on the data. It’s also safe to say that if you looked at the voltages on your cytometer across many people's applications, you’d likely find a small range of parameter voltages. So, it’s not surprising that setting a fixed voltage can definitely be beneficial to the masses. Of course there will always be those exceptional cases where you have super bright staining or highly autofluorescent cells, but again, that’s the exception, not the norm. As you’d typically find, the spacious 7-log scale, when used with real samples, shrinks down to about 4 decades, as the unstained populations typically fall around 10^3. However, the fixed voltage, easy setup philosophy pays dividends when you apply it to a standard workflow. For example, I can walk into the lab, click the startup button, run QC, load a plate, set up my well collection criteria (# of cells, volume, time), and click go. I don’t have to open a bunch of plots and set up voltages or anything like that. I could conceivably collect data blind, dump it into FlowJo, and be done. Workflows don’t get much easier than that. Anticipating some complaints in this regard, Acea Biosciences allows for admin-level adjustment of the voltages. This could be handy for those edge-case situations where you need a bit more dynamic range; however, as noted above, you still need to be careful since you're actually adjusting the voltage for as many as 3 parameters, not just one.


I was a bit perplexed at first when it came to using the software. The sample tube navigation area (akin to DiVa’s browser window) is quite DiVa-like. There’s a top-level container (experiment) with a sub-level group (specimen) that contains samples (tubes). It also allows you to differentiate between group level settings and analysis and tube level settings and analysis. Here, instead of copying and pasting between tubes/groups, you can use a FlowJo-esque drag and drop functionality. The software operates in two modes, and you can switch modes on the fly, that is, you can go from a live acquisition view to a data analysis view. As I’ve mentioned before, I don’t care too much for data analysis features clogging up my acquisition software. One of my key complaints against spending resources developing analysis tools is that anything you build into the acquisition software is not going to be as fully-featured as FlowJo, so why bother. I like to use the example that there are no acquisition software packages that do cell cycle modeling. Well, I can’t use that argument anymore since the NovoCyte software does in fact have cell cycle modeling built-in. I don’t think this changes my view, but if you were in a lab that had one instrument and no prior analysis support, this could be a case where you might acquire and analyze in one place. Outside of that, the software can be as uncluttered as you'd like. Again, since there's no need to adjust voltages, there's really no need to sift through an endless sea of plots and region hierarchies.



Using the Spherotech 8-peak Ultra Rainbow bead set, you can get a glimpse of both the resolution as well as the range among all the channels. Here, they are separated by laser line (colored bar along the left) as well as by which detector is being used. Remember, the plots colored red (for example) all represent the same detector and same filter, just time-gated according to the laser intercept. The differences in resolution, therefore, are not a function of the detector or filters, but simply a difference in excitation wavelength and power, which excite the mixture of impregnated dyes differently.


Dim population resolution is sort of a mixed bag. The system performs well using the blue laser (FITC, PE, PECy7), and APC and Pacific Blue are on the high end of resolution as well. Although the data is not displayed below, other areas where resolution didn't look as good were the red emission channels off the violet laser line (QD800 or BV650, for example). Don't get me wrong, they're still usable; you just won't be using them for your low-abundance antigen.


End-to-end linearity was good. Looking at stained chicken erythrocyte nuclei (CENs), you can see a slight deviation at the higher end of the channels when normalized against the 2N population. The residuals from the perfect 1:1 line only deviate a little over 3%, which is quite good. The other interesting thing is that it is quite easy to fit many generations of nuclei on a single scale due to the dynamic range of the NovoCyte. Here we are calculating values from 1 to 12 nuclei with room to spare!
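The linearity check itself is simple arithmetic. Here's a sketch with invented peak MFIs (not the values from this run): normalize each aggregate peak to the single-nucleus peak and report the percent deviation from the expected integer multiple.

```python
# Sketch of the linearity calculation: normalize each nuclei-aggregate
# peak MFI to the single-nucleus peak and compare against the ideal 1:1
# line. The MFI values below are invented for illustration.

def linearity_residuals(peak_mfis):
    """Percent deviation of each normalized peak from its expected multiple."""
    base = peak_mfis[0]  # single-nucleus peak
    residuals = []
    for n, mfi in enumerate(peak_mfis, start=1):
        observed = mfi / base        # normalized intensity
        residuals.append(100 * (observed - n) / n)
    return residuals

peaks = [1000, 2010, 2950, 4080]  # peaks at 1, 2, 3, 4 nuclei
print([round(r, 1) for r in linearity_residuals(peaks)])
# [0.0, 0.5, -1.7, 2.0]
```

With a 12-aggregate run you'd just pass a longer list and look for any systematic drift at the top of the scale.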


For the carryover test, I used the default wash options for the NovoCyte sampler. Here I used the same PI-stained CENs from the linearity test, but alternated wells with and without PI. In well 1, I ran PI-stained CENs (not shown above). In well 2, I ran unstained CENs (first histogram overlay). In well 3, I ran PI-stained CENs (second histogram overlay). In well 4, I ran unstained CENs after 1 wash (3rd histogram overlay), and in well 5, I ran unstained CENs after a 2nd wash (4th histogram overlay panel). In the 3rd histogram overlay, immediately after running PI-stained CENs, you can see both cellular carryover (the red peak composed of 372 events compared to the background well of 100 events) as well as dye carryover (shown by the increase in MFI of the unstained CEN peak - blue histogram MFI). After the 2nd wash and collection of unstained CENs (4th histogram overlay), the carryover count goes back down to background levels and the MFI of the PI- peak (blue histogram) goes back down to background levels (more or less). So, to achieve less carryover, one would simply do a 2x wash between wells instead of a 1x wash.
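If you want to put a number on carryover, a simple estimate is the background-subtracted event count in the carryover gate as a fraction of the events in the preceding positive well. The 372 and 100 event counts below come from the test above; the positive-well total is an invented placeholder.

```python
# Sketch of a simple percent-carryover estimate from the wash test:
# subtract the background count in the carryover gate from the
# post-sample count, then divide by the events in the positive well.
# The positive-well total (50,000) is invented for illustration.

def percent_carryover(post_wash_events, background_events, positive_events):
    """Background-corrected carryover as a percentage of the positive well."""
    carried = max(post_wash_events - background_events, 0)
    return 100 * carried / positive_events

print(round(percent_carryover(372, 100, 50000), 3))  # 0.544 (after 1 wash)
print(round(percent_carryover(100, 100, 50000), 3))  # 0.0 (after 2 washes)
```

Sub-1% carryover is the usual ballpark target for sampler specs, so the 2x-wash result is right where you'd want it.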


I think the killer feature of the NovoCyte is usability. It performs well enough on all fronts, and although its specs may not be the very best, I think it more than makes up for that with its ease-of-use. Don’t expect it to blow you away with its performance, but then again, it does a fine job. The other variables that come into play when working with a new company are support and service in the field. Will they have enough staff to support instruments located all over the U.S. or possibly worldwide? This is somewhat of an open question. Instrument owners in the field have been pleased for the most part, and one thing that repeatedly comes up is the responsiveness of Acea Biosciences with regards to issues or feature requests. I guess that's one good thing about working with a startup company: they can't afford to lose any customers or get any bad press. Again, aside from its above-average usability, I think what sets the NovoCyte apart from its competitors will also be price. For the right price, this could very well be the best instrument around...depending on its price.

Thursday, March 26, 2015

Index Sorting - From FACSDiVa to FlowJo

We recently upgraded our FACSAria to FACSDiVa 8 running on Windows 7, primarily for the ability to do index sorting. Getting used to a brand new set of DiVa issues and quirks has been difficult, but we soldiered on nonetheless. After scouring the web for resources on both index sorting and analyzing index sorting data outside of FACSDiVa, I decided to compile all the resources in one place. They are out there; it's just a pain to jump around to various sites trying to piece all the information together. I've done the legwork already, so read on to get the info. Of course, I'm sure there are more elegant ways of doing this in other programs or even in FlowJo, but I needed this info yesterday, so I'm documenting it here for future reference.

Figuring out index sorting in FACSDiVa. You may think index sorting is no more than checking a box in FACSDiVa, but there are enough one-off situations that arise that it really warrants a separate FAQ. There are two resources that are quite helpful in figuring this part out. The first, oddly enough, is BD's very own Index Sorting Manual (<-- fixed bad link), which comes as an addendum to the FACSDiVa software manual and may not even be installed on your computer or available for download from BD's website. I only came upon this after our BD service engineer sent me a copy of it. The second resource is a document presented at GLIIFCA 2014 by Matt Cochran (University of Rochester), in which he outlines some of his tips and tricks for working with index sorting in FACSDiVa 8. 

So, let's assume you figure out how to successfully perform an index sort in FACSDiVa. You should have a Pre-sort FCS file of your entire population and an Index sort "tube" for each plate you ran. You can export both (or all) of these as FCS files. There is a decent interface for looking at your index sort plate information within FACSDiVa, but if you're used to doing all your analysis in FlowJo, you probably want to bring that data over at some point. And here's the fun part.

Analyzing index sorting data in FlowJo. I have an application where a user is index sorting based on a range of FITC intensities. The resulting plate will be a mix of FITC low and FITC high clones. The goal of index sorting, in this case, is to retain the original FITC intensity information for each well after the sort. What follows below is A method (not THE method) I stumbled upon to go from an index sort file from FACSDiVa to Figure 1 below. I'd really love for someone to tell me there is a way easier way to do this in FlowJo.
Figure 1. Heatmap analysis of index sorting file.

Figure 2. Running the initial script to create 96 populations
Step 1: Use the Script Editor index sorting example from the Daily Dongle Blog (or see Addendum below regarding the method in Version 9). You simply copy and paste the script starting with /** --- Iterate samples --- **/ all the way through gate.update(); } into the script editor window (under the Tools tab of the ribbon) in version 10.0.7 (if you have access to the 10.0.8beta version, I would do this step in that version; you'll see why later). Highlight your index sorting file in the workspace and click the run button in the script editor window. You should now have 96 populations under your index sorting file. If you end up with a bunch of "-" where it usually says the number of cells in each population, click the refresh button at the top of the workspace window, and then it'll show you that there is 1 cell per region (Figure 2).

Step 2: The next step is to export each of these populations as its own FCS file, in essence creating 96 FCS files. The problem here is that you can run the initial index sorting script in version 10.0.7, but you can't do the export to 96 FCS files in 10.0.7 for the Mac (I think you can do this in the Windows version, but I'm not sure). You can do the export to 96 separate files in Mac version 9.8.3, but you can't run the initial script in 9.8.3. So, if you can do this all in 10.0.8beta, that's your best bet (or on Windows). In 10.0.8b, you can highlight all the populations, choose export (right click or within the File tab in the ribbon), and export these as 96 FCS files.

Step 3: Using the plate layout to create a heatmap. The last step is to load the 96 FCS files into FlowJo v10.x.x and assign the Well ID keyword to each of the files corresponding to their position on the plate. Now, the files are in chronological order going across and then down (in serpentine fashion). So all you have to do is add the Well ID keyword as a column and copy and paste a list of Well IDs (A1 - H12 in serpentine fashion) from a spreadsheet. BUT WAIT, THERE's MORE! If you're doing this on a Mac, this post from the Daily Dongle states that since Mac Excel copies data in the ANSI format, you won't be able to paste into FlowJo, which only reads the Unicode format. To get around this, create the Well ID list in Google Sheets and copy and paste from there (Google Sheets copies data in Unicode format). Now that you have a Well ID associated with each of the files, you can use this link to FlowJo's documentation to set up a heatmap of your index sorting data.
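The serpentine Well ID list itself is easy to generate programmatically rather than maintaining it in a spreadsheet. Here's a small sketch that emits A1 through H12 in across-then-back order, ready to paste into the Well ID column:

```python
# Sketch: generate the 96 Well IDs in serpentine order (across one row,
# then back along the next), matching the sampler's acquisition order.

def serpentine_well_ids(rows="ABCDEFGH", cols=12):
    """Return Well IDs row by row, reversing every other row."""
    ids = []
    for i, row in enumerate(rows):
        cells = [f"{row}{c}" for c in range(1, cols + 1)]
        if i % 2 == 1:          # reverse the odd rows (B, D, F, H)
            cells.reverse()
        ids.extend(cells)
    return ids

wells = serpentine_well_ids()
print(wells[:3], wells[12:15], len(wells))
# ['A1', 'A2', 'A3'] ['B12', 'B11', 'B10'] 96
```

Print the list one ID per line (`print("\n".join(wells))`) and the clipboard paste lines up with the 96 files.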

And there you have it. Please leave a comment below with your preferred method of analyzing index sorting data using whatever software you like. 

Addendum #1: Using the script is somewhat cumbersome. Thanks to Helene Dujardin (from HCD Bioexperts) for the tip below:

"There is another way in version 9, as there is an option for index sort analysis. Select your sample and go to the menu Platform / Event number gate / Create Indexed sort gates. It will directly create a gate for each of your wells. Each gate name will be the corresponding well ID.

You can then export each of your gate as a new fcs file also with version 9. Your exported fcs file name will include the well ID if your original fcs file name is not too long (you can change it by changing the $FIL keyword)."

Addendum #2: Using the methods outlined in Addendum #1, I'll add one more point of interest. When you export the Index FCS file from FACSDiVa, you might get a really long name (Specimen_001_Index_Tube_001.fcs). FlowJo v9 freezes when you try to export all the regions as FCS files, so you'll need to rename the files after you import the parent into FlowJo. I've been renaming them (CMD+R shortcut) INDX_1, INDX_2, etc... Now, when you export the regions as FCS files, they'll be labeled INDX_1_A01, etc...

Wednesday, December 31, 2014

Flow Cytometry Core Facility New Year's Resolutions

It's that time of year again when the gyms are packed and weight-loss commercials air continuously. This year, why not turn your attention towards your core facility and come up with some resolutions the whole lab can take part in? The best part is you'll have help from the rest of your lab mates to keep you on task.

So, just as we do with our personal lives, allow me to present an ambitious 10 resolutions for the UCFlow core facility. Presented in no particular order, I give you:

  1. I'd love to devote more time towards taking better care of our instruments, in terms of routine maintenance and a more streamlined QA process across the board.
  2. Do a better job getting administrative tasks like billing/invoicing/usage tracking/usage analysis done on-time and with greater regularity.
  3. It's always nice to see how the work done in the core fits into the bigger picture, so I would like to go to more of my users' talks on campus.
  4. It's pretty clear data analysis is a hot topic these days, so I want to focus more attention on complex data analysis solutions for users (is R worth it?, try more advanced stuff in Cytobank or FlowJo?, etc...)
  5. Who can't use more/new instruments? You'll get none of the instruments you don't write a grant for. I think I need to be more aggressive in my pursuit of new funding sources for instrumentation.
  6. Blog more often (a perennial resolution for me).
  7. I'm convinced that the Hangouts on Air that we do in the Cytometry Community on Google+ are super useful, and so I'd like to turn that into a more regular thing. 
  8. I've always thought that eventually core facilities would collapse into each other to create mega technology centers. But, before that happens, I would like to start by increasing interactions with other core facilities on campus to see what they're doing and what's new in technology in other fields.
  9. It used to be the rule in our core that if you went to a meeting, you had to present something. I haven't been as faithful to that rule as I would like, so I'm bringing it back.
  10. Of course this last one happens all the time, but I would like to focus some attention on re-evaluating facility costs with greater scrutiny to determine where reductions can be made.
Well, there you have it. I just hope I'll be able to hold onto these longer than my annual attempts to get back "in shape." How about you? Any resolutions you'd like to add for your core facility? Leave a comment.

Friday, December 19, 2014

Core Facility Acknowledgment Accounting 101 - How to make sure your work is being recognized.
BioTechniques Article on SRL Attribution
A recent article in Biotechniques has spurred some interesting discussions in the Academic Core Facility (or as we cytometry cores like to call them, Shared Resource Laboratories - SRLs) world. The gist of the article states that all too often core facilities are not properly acknowledged in publications that clearly are using the services provided by their institutional cores. The flip-side of this argument is that investigators are already paying for the services rendered so that fee is essentially all the "acknowledgment" that is required. However, since many times core facilities are partially funded by government agencies, the services (and more accurately the service recharge rates) are being subsidized. Therefore, the payment isn't payment enough.

Whether you agree or disagree with this basic tenet is really beyond the scope of this post. What I'd like to share here is my way of fostering the proper relationship with my users such that they feel compelled to acknowledge the excellent work of the core instead of feeling obligated to do so.

What follows is basically a three-part approach to accomplishing the goal of being acknowledged as a core facility in publications that utilize your services. The reasons you may wish to do this vary, but likely involve justifying your core facility's existence to your institution's administrators or to the various "Centers" you may receive funding from. For example, as part of the University of Chicago's designation as a Comprehensive Cancer Center by the NCI, we must keep track of cancer-related publications that utilize our core facility. So, obviously it would be easiest for us to search PubMed for the inclusion of our core facility's name or even the cancer center support grant number in the reference. However, many times our facility is omitted from the acknowledgement section of the publication. To help modify this behavior, we need to first find the publications, then organize them, and lastly reach out to our authors/users to help them understand why acknowledgements are important. Here are those steps.

Part 1 - Finding publications that should be attributed to your core facility's work.

Fig. 1 - Keywords to find references based on your core's services.
You can use the "Saved Searches" functionality within PubMed to find relevant articles and have them emailed to you directly as soon as something meets the search criteria. There's already a good tutorial on PubMed that will walk you through the steps, so I won't go into that in great detail, but let me summarize my steps.

Figure 2. Part #1 of search yields over 168,000 results.
I jump right into the Advanced Search Builder in PubMed using various keywords for different parts of the search structure. For example, I limit the search results to an affiliation of University of Chicago. There are a few external users that I'd like to track as well, but I put them in a separate search. The first part is to put in keywords based on any part of the text that your users may use to describe what they did in your facility. Remember, many users refer to any part of flow cytometry as "FACS" so you'll want to make that part of your search criteria. Figure 1 shows you some of the ones I use (note the use of 'Or' boolean to search on any of these keywords).
Figure 3. Search restricted to affiliation of University of Chicago

Next I use the "Add to history" link near the search to hold onto those search results temporarily (Figure 2).

I click the "Add" link next to search #1 to add these 168,000+ results back into the builder, and then refine the search by using the "And" boolean and restricting the "Affiliation" field to 'University of Chicago' (Figure 3).

Figure 4. Search based on keywords, affiliation, date range
You can further refine the search based on Date ranges or excluding reviews or a bunch of other search criteria using the same strategy (Add to history, then add those results back to the Builder and refine again). I find this method of going back and forth between the history table and the builder easier than trying to put everything into one complex boolean structure. Click search to view your results (Figure 4).
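If you'd rather paste a single expression than bounce between the history table and the builder, the same search can be composed as one query string. A sketch (the keywords below are examples; substitute your own core's terms and affiliation):

```python
# Sketch: compose the keyword-OR plus affiliation-AND search as one
# PubMed query string. The keyword list and affiliation are examples,
# not my actual saved search.

def build_pubmed_query(keywords, affiliation):
    """Join keywords with OR, then AND-restrict to an affiliation."""
    kw = " OR ".join(f'"{k}"[Text Word]' for k in keywords)
    return f"({kw}) AND {affiliation}[Affiliation]"

query = build_pubmed_query(
    ["flow cytometry", "FACS", "cell sorting", "flow sorting"],
    '"University of Chicago"',
)
print(query)
```

Paste the result straight into the PubMed search box; date-range and review filters can still be tacked on afterwards the same way.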

Once you've created your search criteria and confirmed that it is giving you what you intended, you'll want to save the search using the "Save search" link below the search box. Figure 5 shows you some of the options available for setting up the saved search. Note that you'll need a PubMed profile to set this up, so the first time you try to save a search, it'll ask you to create an account. Here, I've chosen to send myself an email weekly on Mondays (when I'm likely to have free time) so I can review the new references.
Figure 5. Saving the search and setting up email digest.
I've also placed some text (or even a custom #) so that I can filter my email properly and it doesn't get lost amongst the email clutter. I save the search and wait for the emails. By the way, you can now set up all sorts of notifications. For example, I've recently been doing a lot of microparticle stuff, so I have a separate digest set up to send me email notifications of new publications using flow/image cytometry to analyze microparticles (or microvesicles or micro particles, etc...)

Part 2 - Organize references and tag them to easily create reports later.

In part 2, my goal is to receive these email notifications, skim through the publication and then find a way to organize the references neatly and efficiently.

Figure 6. Email notification from My NCBI
The emails arrive in my inbox on Monday mornings as references become available. If there are no new references, you will not get an email. Figure 6 shows an example of what this email looks like.

Next, I follow the link, and read through the manuscript to ensure the work being reported was in fact from my core. If I'm unsure, I can always ask the author, but I tend to recognize work done on my instruments.

Figure 7. One-click add to Zotero button in URL bar
One thing that becomes evident is that you need a way to manage all these references. There are a ton of ways to do this, from a rudimentary Word doc or spreadsheet to sophisticated reference-management software. The tool I like for this part is Zotero. It's similar in function to Endnote or Mendeley; basically, it's an organizational tool for references. The part I like most about Zotero is that there's a Chrome extension that allows one-click adding of references to my database (Figure 7). Plus, it will go out and find the PDF of the full-text reference and store that locally as well (when available). It lives in the cloud and can be accessed anywhere.

Figure 8. Zotero Organizing tool for references (running on Mac)
Once in Zotero (Figure 8), I can add tags to the references to help organize them further. I like to tag things by service used (cell sorting vs. analyzer usage vs. other), instrument referenced (e.g. FACSAria), whether the work could be used for my Cancer Center grant renewal (UCCCC), and other informative tags. Then, down the road when I need to pull up some justification for a new sorter, I can include a list of publications that utilized the cell sorting service, or even a specific sorter.
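Zotero also exposes a web API (api.zotero.org), so a tagged reference list can be pulled programmatically, say, at grant-renewal time. A sketch that just builds the request URL; the user ID and tag below are placeholders, not real values:

```python
from urllib.parse import urlencode

def zotero_items_url(user_id, tag=None, limit=100):
    """Build a Zotero web API URL listing a library's items, optionally
    filtered to a single tag (e.g. the 'Cell Sorting' service tag)."""
    params = {"format": "json", "limit": limit}
    if tag:
        params["tag"] = tag
    return f"https://api.zotero.org/users/{user_id}/items?{urlencode(params)}"

# Hypothetical user ID; fetching this URL (with an API key header for
# private libraries) returns the tagged references as JSON.
print(zotero_items_url("1234567", tag="Cell Sorting"))
```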

This makes organizing and searching through references a breeze.

Figure 9. Thanks for the acknowledgement
Part 3 - Compel investigators to acknowledge your core facility.

Now comes the hard part. How to suggest to your facility users that they should be acknowledging your core without sounding like a jerk.

As I'm skimming references, I'll quickly jump to the acknowledgement section and check for recognition of the core, or perhaps individual members of the core (either is fine with me). If the user does acknowledge the core, I make sure to send them an email thanking them for doing so. This positive reinforcement goes a long way toward ensuring this type of action recurs in the future. I also explain why it's important to us that the core be acknowledged. An example email is shown in Figure 9. Of course congratulating them on a job well done can only help to sweeten the deal.

Figure 10. Maybe next time...?
If I see there is no mention of the core in the acknowledgements or methods section, I'll send a similarly positive email, but ask them to consider acknowledging us in the future. I make sure to include some example text of what I would like them to say, as well as send them a link to the example text on our web site (Figure 10).

Of course, you can save these emails as templates and simply change the name and journal to personalize them.
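The template idea is easy to automate, too. A minimal sketch using Python's string.Template; the wording and the fields that change per email ($name, $journal) are just illustrative:

```python
from string import Template

# A hypothetical thank-you template; $name and $journal are the only
# fields swapped out from email to email.
ACK_TEMPLATE = Template(
    "Dear $name,\n\n"
    "Congratulations on your recent paper in $journal! Thank you for "
    "acknowledging the core facility. Publication acknowledgements are "
    "one of the key metrics we use to justify continued support for "
    "the core.\n"
)

print(ACK_TEMPLATE.substitute(name="Dr. Smith", journal="Cytometry Part A"))
```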

The responses I've received to these emails have been tremendous. I think users are both appreciative of the recognition of their work and understanding of the core's need to be recognized.

We all understand the need for metrics such as publications and their importance in validating the success of core facilities. But instead of taking a passive approach and hoping people read the page on your web site asking to be acknowledged, the method proposed here takes a proactive approach, and it has already increased the number of acknowledgements we receive.

PubMed is pretty comprehensive, but there are other sources for finding work that should point back to your core facility. Magazine articles, intra-institutional articles or highlights, blog posts, etc... should all be explored and stored. You can use RSS feeds or Google search alerts to help you find this information too. Asking a PI to mention the core facility in an intra-institutional newsletter is certainly within your purview.

Happy Hunting!

Thursday, September 4, 2014

A First Look at the Beckman Coulter CytoFLEX - Strong Performance in a Small Box

Over the past few years, we've been inundated with small, inexpensive cytometers with the promise that they can perform as well as the big boys. Up until now, I would have told you not to waste your time... up until now.

In 2013, an unknown company called Xitogen set up a booth at the annual CYTO conference. Before long, a buzz was racing through the exhibit-floor aisles about a flow cytometer starting at ~US$25,000 (1 laser, 2 colors). The Chinese company, headquartered in the Suzhou Industrial Park, set out to give Chinese researchers a way to acquire affordable flow cytometry instrumentation without having to deal with overpriced imported hardware from the big players. With zero U.S. install base and zero user-generated data, CYTO 2013 came and went, and the buzz surrounding Xitogen died out. It seemed obvious that the better-known cytometer manufacturers would be taking a look at the company as a possible acquisition, and in April of 2014, Beckman Coulter announced it would purchase Xitogen for an undisclosed sum. The acquisition was finalized in June 2014, and at CYTO 2014, Beckman Coulter revealed the re-branded instrument, now called the CytoFLEX.

I had the chance to spend about a month with the CytoFLEX and what follows are some of my thoughts about the key features, successes and failures of this instrument.

General Technical Specs:

Beckman Coulter CytoFLEX Analyzer
The CytoFLEX came to me as a 3-laser system including a 50mW 488nm laser, a 55mW 640nm laser, and a 93mW 405nm laser. The system also has 9 fluorescence channels in a 4-3-2 configuration (blue, red, and violet, respectively). In addition, there are 3 light scatter parameters: the typical blue-laser scatter yielding forward and side scatter, plus an additional side scatter parameter off the 405nm laser. Pulse height and area are collected for all parameters, and a width signal can be selected for any one of the parameters. The fluidics system is controlled through peristaltic pumps for both the sheath and sample lines, and the sample volume flow rate can range from 10ul/min up to 240ul/min, with 10, 30, and 60 ul/min presets (referred to as Low, Med, and Hi, respectively). A single tube holder with built-in backflush loads samples into the instrument one at a time, and the hardware is controlled by the bare-bones but highly functional CytExpert acquisition software.

The system that is supposed to ship some time in October will be configurable with 3 spatially separated lasers (with a 4th coming soon?), with a variety of laser options and colors available. The base configuration should include 3 lasers and 13 colors in a 5-5-3 config (violet, blue, red, respectively).

Look and feel:

A look inside the CytoFLEX revealing lots of unused space.
The instrument itself is quite small, fitting roughly into a 40cm cube, but even the box seems too big for what's being housed inside. A peek under the hood reveals a ton of unused space (multiwell autosampler, perhaps!!!). Pretty much every component on this instrument looks like a fraction of its counterpart on more common cytometers, and it's quite clear that every penny possible was pinched in its manufacture. Everything about it screams cheap. That's not necessarily a bad thing per se, but as soon as you start opening up lids and doors and see some of the components inside, it becomes clear how they were able to create a functional instrument at bargain prices. Beckman Coulter has said that part of what it will do with the CytoFLEX is add some polish to the components without adding cost. A final product and price point have yet to be revealed, but we expect to see it in the wild this fall.


The sheath and waste tanks sit beside the instrument, each with a single output or input line. They hold about 5 liters each, which should last most of a day with moderate use and reasonable amounts of backflushing. The preferred sheath fluid for this system is some high-quality H2O (insert Waterboy reference here). Beckman Coulter will likely sell you a box of water at a premium and call it "Coulter Sheath," but you'll be just fine grabbing some DI from your MilliQ system.

Again, the system moves fluid throughout using a pair of peristaltic pumps. The pulsatile flow of peristaltic pumps tends to make them less than ideal for a system that requires stable fluid delivery, but in testing the CytoFLEX, I saw no fluctuations in any of the channels over long runs with beads (plotting bead intensity vs. time). This type of instability due to peristaltic pump oscillation had been reported in some early iterations of the Accuri C6. In the CytoFLEX, special attention was paid to creating a pulseless peristaltic pump, and that claim definitely held up in my testing.

Close-up of the sample tube loading arm.
Although the sample volume flow rate has a custom setting that allows it to go up to 240ul/min, in my tests I saw dramatic declines in scatter profiles, and less obvious but still present losses in resolution of fluorescence profiles, beyond 100ul/min. I think the 240 setting would be great for cleaning out the sample line, or maybe forcing through a stubborn clog, but not for collecting data. This behavior is pretty much on par with other hydrodynamically focused fluidics systems (unlike, for example, the Attune, which uses acoustic focusing and can easily go up to 1000ul/min with minimal degradation of profiles). The 80um-wide beam spots may, however, insulate the wide sample core stream from truly poor resolution.

The sample loading stage is a bit funky at this point. It moves in and out with the smoothness of 20-grit sandpaper sliding across berber carpeting (i.e. not smooth at all). This loading/unloading operation slows down the process just enough to be annoying, but you get used to it after a while. No plate loader (yet), no multi-tube loader (yet).


Optics and Electronics:

The optical system on the CytoFLEX is the biggest departure from any other instrument out there. A lot of the technology is proprietary; I'll share as much as I was able to deduce, but I could be flat wrong on some things, so take what I say with a grain of salt.

Custom made Laser modules
Don't expect to see the familiar Coherent laser cubes on this instrument. In fact, these lasers are custom made in-house at the Chinese facility (where the entire instrument is manufactured). When you take off the laser compartment cover, you're greeted with nondescript tiny black boxes, each with a sticker telling you which laser it is. Here is where they can save a lot of money. Without being beholden to the Coherent behemoth, they're not locked into Coherent prices. And, since they make the lasers themselves, they can customize everything about them for this specific instrument. Worried about the quality? I was too, until I saw the performance. Of course, what I'm not able to test is long-term laser life. The stated lifetime spec on Xitogen's web site is 20,000 hours, but this hasn't been tested in the field, as far as I know. The air-launched beams go through the typical steering and shaping optics and terminate at the flow cell in front of one of the 7 "pinholes" on the instrument. Each beam is shaped to a 5um x 80um gaussian profile courtesy of the beam-shaping optic and its large 1.3 NA, which means most of the laser power gets focused into a slit at the "pinhole" (N.B. I say "pinhole" since it's unclear whether there are actual pinholes in the traditional sense or some other sort of voodoo magic). This should allow for maximal excitation of fluors and minimal crosstalk between laser lines.
A look at the laser path with the covers off and interlock defeated
(Don't try this at home kids!)

Looking into the FAPD with a filter stick removed.
Emitted light is collected by fiber optic bundles, which carry the light to the detector blocks. The detector blocks, referred to as Fiber Array Photo Detector modules or FAPDs, are where all the innovation takes place. The first thing you'll notice when looking inside a FAPD is the small size of the filter sticks. Pulling one out reveals a tiny piece of glass no more than a few millimeters square. However, the rest of the components inside the FAPD are completely foreign to someone who's looked at dichroics, bandpasses, and PMTs their whole cytometry life. The light exiting the fiber passes through a wavelength division multiplexer, which acts like a conventional set of dichroics to partially split the light into distinct ranges; the light is then further refined by the bandpass filters before hitting the photodiode. Photodiode? Don't you mean PMT? No, you read that right: this system uses avalanche photodiodes (APDs). These semiconductor detectors are well known for their high sensitivity, and silicon-based APDs have good quantum efficiency in the visible and near-IR range as well as low noise. If they're so sensitive, why haven't they been used before? Good question, and as far as I can tell, the problem has always been the amount of voltage that needs to be applied to achieve high sensitivity, with that high voltage causing breakdown of the APD. Somehow, this has been circumvented in the CytoFLEX. The other interesting thing about the detectors is that the response of the APDs across the entire range is absolutely linear. Beckman Coulter stands by this so firmly that if you set up compensation on FITC vs. PE at one set of voltages and then change the voltages, the system will automatically adjust the compensation values to account for the new settings. This can only work if the response is linear from end to end, making compensation merely a mathematical equation with voltage as one of the variables.
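Beckman Coulter hasn't published the math behind this recalculation, but if the detector response really is linear, a spillover coefficient measured at one pair of gains can be rescaled to any other pair by the ratio of gains. A toy sketch under that assumption (the gain values would come from a gain-vs-voltage calibration curve the instrument presumably stores internally):

```python
def rescale_spillover(s_old, gain_spill_old, gain_spill_new,
                      gain_primary_old, gain_primary_new):
    """Rescale a spillover coefficient (spill-channel signal divided by
    primary-channel signal) when detector gains change, assuming a
    perfectly linear detector response in both channels."""
    return (s_old * (gain_spill_new / gain_spill_old)
                  / (gain_primary_new / gain_primary_old))

# If FITC spills 12% into the PE channel at the original gains, doubling
# the PE-channel gain while leaving FITC alone doubles the spillover.
s_new = rescale_spillover(0.12, 1.0, 2.0, 1.0, 1.0)
print(s_new)  # 0.24
```

With a nonlinear detector (e.g. a PMT near saturation) this shortcut breaks down, which is presumably why voltage-independent compensation is being touted as an APD advantage.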


The system uses 16-bit A/D converters and boasts 7 decades of dynamic range. Normally 16 bits doesn't get you that much range, but by oversampling the pulses at 40MHz and adding up all the samples, a full 7-log scale can be achieved. However, as with most of these large scales, the first decade tends to exhibit poor resolution and is "hidden" by default, so the displayed scale runs from 10^2 up to 10^7. Qualitatively, I will say that I was able to resolve all 8 peaks of the 8-peak rainbow bead set with some room on both sides of peaks 1 and 8 - that doesn't always happen.
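The arithmetic behind that claim is worth sketching. A 16-bit ADC alone spans only about 4.8 decades, but summing N oversampled points raises the maximum representable pulse area to N times the ADC maximum. The sample counts below are illustrative, not the instrument's actual numbers:

```python
import math

ADC_BITS = 16
adc_max = 2**ADC_BITS - 1   # 65535 counts, ~4.8 decades on its own

def decades(max_value):
    """Number of log10 decades spanned from 1 up to max_value."""
    return math.log10(max_value)

# Summing N oversampled points raises the max pulse area to N * adc_max.
# At 40 MHz, a hypothetical 4 us pulse would yield ~160 samples, which
# already pushes the summed area past 7 decades.
for n_samples in (1, 160, 256):
    print(n_samples, round(decades(n_samples * adc_max), 2))
```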

One of the only complaints I had about this instrument was the loss of data at moderate to high event rates. This has to be due to the pulse processing speed of the electronics and its inability to process the signals fast enough. It's likely influenced by sample concentration and the system's dynamic integration window - not unlike the FACSDiVa window extension setting. If the window were reduced, the % abort would likely decrease. Increasing the threshold would also improve resolution between pulses and thereby decrease the abort rate. I did not explore either of these options and just ran with the default window extension and threshold. As you can see from the chart, even a moderate rate of 10,000 events per second yields an alarming abort rate, and going faster results in a recovery of 50% or less. It's important to separate your ideas about % aborts on analyzers from those on high-speed sorters. Cell sorters have the advantage of pushing cells through at very high velocities, resulting in narrower pulses and an easier time resolving two closely spaced pulses. On analyzers, the cell velocity is much slower, resulting in broad pulses and more difficulty resolving closely spaced pulses. Abort rates are therefore typically going to be higher on slow-flow analyzers; we're just not as aware of them because on analyzers we are usually only concerned with the frequencies of populations, not their absolute yield (as on cell sorters). So, 10% abort at 20,000 events per second might be reasonable; 10% abort at 10,000 eps probably is not.
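This behavior can be reasoned about with simple Poisson statistics: if an event is spoiled whenever another event lands within some integration window W, the expected abort fraction at random arrival rate R is 1 - exp(-R*W). A sketch with a purely hypothetical 10 us window (the CytoFLEX's real window isn't published):

```python
import math

def expected_abort_fraction(event_rate_hz, window_s):
    """Fraction of events expected to collide with another event inside
    the integration window, assuming Poisson (random) arrivals:
    P(at least one other event within W) = 1 - exp(-R * W)."""
    return 1.0 - math.exp(-event_rate_hz * window_s)

# Wider windows (broad analyzer pulses) abort far more often than narrow
# ones (fast sorter pulses) at the same event rate.
for rate in (2_500, 10_000, 20_000, 50_000):
    print(rate, round(expected_abort_fraction(rate, 10e-6), 3))
```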

For this test, I created a concentrated sample using a suspension cell line which, at 60ul/min, should yield 50,000 events per second. I then created serial dilutions from there, all the way down to an expected 2,500 events per second. I ran each tube on the instrument and recorded the event rate displayed by the system's counters.
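The dilution series follows from the simple relation events/s = concentration x volumetric flow rate. A sketch of the arithmetic (the starting concentration is back-calculated from the 50,000 eps target, not a measured value):

```python
def event_rate(conc_per_ml, flow_ul_per_min):
    """Expected events per second given sample concentration (cells/mL)
    and volumetric flow rate (uL/min)."""
    ml_per_s = flow_ul_per_min / 1000.0 / 60.0
    return conc_per_ml * ml_per_s

# 50,000 eps at 60 uL/min implies ~5e7 cells/mL to start; each 2x
# dilution from there halves the expected event rate.
conc = 5e7
while event_rate(conc, 60) >= 2_500:
    print(int(conc), "cells/mL ->", int(event_rate(conc, 60)), "eps")
    conc /= 2
```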


Software:

If you've used CellQuest and FACSDiVa in your cytometry lifetime, you'll feel right at home here. There's not much to say about the software except that it works. It was super easy for me to pick up: I was shown nothing as far as how to operate the instrument, do compensation, etc., and I was able to figure it out with minimal struggle. The CytExpert software does one thing really well: it gives you a large, unobstructed view of your data, with just enough controls in a thin side panel to let you acquire data. I'm sure there are some analysis tools built in, but I don't care; I just want to acquire data, dump it into FlowJo, and worry about analysis later. It gives you the ability to do automated compensation, use biexponential display, perform gating, and show stats windows. It would be interesting to see if something like Kaluza-G would ever make it onto something like this. But then again, Beckman Coulter already has 900 acquisition software packages, so what's one more!


Performance:

Finally, the good stuff. What can I say, this thing rocks. In terms of fluorescence sensitivity, it beat the pants off of anything I've ever tested, full stop. I've put together a range of values for fluorescence resolution that shows the spread of instruments I've tested. The value (called qNORM) represents the lowest number of antibodies bound that can be resolved from unstained lymphocytes. The lower the value, the better, and as you can see, the CytoFLEX, with its APD detectors and DIY lasers, easily beats the average across the board. Of course, I ran all the other common bead sets on this instrument. Everything I threw at it, it handled with ease.

8-Peak Resolution at low and high flow rates:

As you can see, resolving 8-peak beads is a cinch on the CytoFLEX across pretty much all channels. Even at the highest flow rate (240ul/min) the fluorescence resolution remains relatively unchanged; the light scatter, however, experiences some funky spread at that rate.

APD Voltage Optimization:

Using a test similar to PMT voltage optimization, I wanted to see if the APDs behaved the same way. With a blank bead, the voltage was moved up and down the scale at appropriate intervals, and the rCV was calculated on the single bead peak in each of the fluorescence channels. It does appear that there is a sweet spot for APD voltages, one that varies across parameters. This mimics the PMT optimization profiles commonly seen before.
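For reference, rCV here means the robust CV computed on the bead peak. The instrument's exact formula isn't documented; one common percentile-based formulation looks like this:

```python
import statistics

def robust_cv(values):
    """Robust CV (%): half the 16th-to-84th percentile spread divided by
    the median. This is one common formulation, not necessarily the one
    CytExpert uses internally."""
    vals = sorted(values)
    n = len(vals)
    p16 = vals[int(0.16 * (n - 1))]
    p84 = vals[int(0.84 * (n - 1))]
    return 100.0 * (p84 - p16) / 2.0 / statistics.median(vals)

# Synthetic single-peak bead intensities centered on 100
beads = list(range(90, 111))
print(robust_cv(beads))  # 6.5
```

Sweeping the voltage and plotting robust_cv per channel reproduces the "sweet spot" curves described above: too little gain and electronic noise broadens the peak, too much and the detector's own noise takes over.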


Linearity:

A pretty simple linearity test using PI-stained CENs, and everything checks out as expected. However, as I mentioned earlier, linearity plays a bigger role on this instrument than on others. With the CytExpert software, you can set up compensation at one set of voltages, change the voltages (because it's a different cell line, or the sample is too bright, or some other reason), and it will recalculate compensation based on the new voltages. This may have been (or may still be) part of the FACSVerse software, but I've never used one of those, so I'm not sure. Theoretically, then, you could create a set of comp tubes using non-tandem antibodies once and then recall those comps each time, even after changing voltages. Anywho, linearity is great: it deviates from the theoretical line by less than 1% across the board.
PI Stained CENs comparing the theoretical line and the actual data

qNORM Resolution Comparison:

Without going too much into the methodology (because I've done it so many times before), what follows is a comparison of pretty much every instrument I've ever tested (grey boxes with quartile whiskers) with the CytoFLEX (blue circles) overlain. This metric measures the number of bound antibodies that can be resolved from unstained lymphocytes, so lower numbers equal better low-end resolution. As you can see, the CytoFLEX compares very well with the best instruments out there. It definitely beats every instrument I own in the FITC, PE, PECy7, and APC channels; in the PacBlue channel, it's about average. I'm pretty sure this system can resolve pretty much any dim population you can throw at it.

Final Thoughts:

It's evident to me that the CytoFLEX would suit the needs of many demanding applications. There's really no questioning its performance in terms of fluorescence detection. Light scatter resolution of cell populations wasn't as good as on some of my better instruments; however, small particle detection, especially using 405nm side scatter, is reported to give <200nm resolution. I typically don't test small-particle stuff since it's not really my thing. The fluidics seemed stable and robust for the time I had it. I ran as many cell samples as I could to see if I could clog it up or make things look bad, and other than the high abort rates at high event rates, I saw no issues from a fluidics standpoint. The software is fine for what it needs to do. I'm so entrenched in doing analysis in FlowJo that I couldn't care less whether there are histogram overlays or other analysis-only plots in my acquisition software. I just want it to be fast and simple to use.

We also don't really know about the long-term reliability of the hardware components. Sure, everything worked fine over the course of a month, but what about a couple of years or more? Will it have the staying power and uptime of a FACScan? This, I'm afraid, only time will tell.

But I think the most important take-home message here is that this instrument proves flow cytometry hardware is absolutely a commodity in the eyes of the consumer. As fancy as one wants to make hardware these days, no one is going to be impressed, and the fact that hardware can be made cheaply reinforces the point. What this means is that we'll finally see a shift in focus away from over-engineered hardware toward hardware that just works, this time paired with a super-slick user interface that people are attracted to. The future is all about software and services, and I, for one, couldn't be happier!

Postscript: At the time of publishing, Beckman Coulter launched a new splash page with specs and a glimpse of the new exterior of the CytoFLEX. You can reach that page here.