30 October 2014

That's Just WAC

We are all about new methods here.  One method I have never been a big fan of is Weak Affinity Chromatography, which has been reviewed here, here, and here.  In this method, a target is immobilized on a column and compounds are flowed through and retarded if they interact. Affinity is determined by retention time, which we all know is never influenced by experimental conditions.  In this paper, the KDs determined by WAC are compared to those by SPR, as well as efficiency and consumption of materials.
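For the curious, the usual WAC relationship (assuming a simple 1:1 binding model; the exact treatment in the paper may differ) estimates KD from the amount of immobilized active protein and the difference in retention volume between the active and inactivated columns. A minimal sketch in Python, with made-up numbers:

```python
# Hedged sketch of KD estimation in Weak Affinity Chromatography.
# Assumes KD = B_tot / (V_active - V_inactive) for a 1:1 model, where
# B_tot is the moles of active immobilized protein.
def wac_kd(b_tot_mol, v_active_ml, v_inactive_ml):
    """Estimate KD (in M) from WAC retention volumes.

    b_tot_mol: moles of active immobilized protein on the column
    v_active_ml / v_inactive_ml: retention volumes (mL) on the active
    and covalently inactivated columns; their difference is the
    binding-specific retention.
    """
    dv_liters = (v_active_ml - v_inactive_ml) / 1000.0  # mL -> L
    if dv_liters <= 0:
        raise ValueError("no specific retention: compound may not bind")
    return b_tot_mol / dv_liters

# Example: 1 nmol of protein, 1 mL of specific retention -> KD = 1 uM
kd = wac_kd(1e-9, 2.0, 1.0)
```

The numbers above are purely illustrative; in practice B_tot itself must be determined, which is one of the experimental complications of the method.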

As a model system they used alpha-thrombin, either in the active form or the covalently inactivated form.  Figure 1 shows the results for one compound. 
Figure 1.  A. WAC: tspec is difference in retention time for active (solid) and inactive (dashed) protein.  B. Triplicate response generated by subtracting in situ inhibited protein from active protein.  C. Isotherm from SPR. 
SPR required 120 minutes for triplicate data at 10 concentrations; WAC required 175 minutes.  WAC, however, can require up to 525 minutes for a single compound.  The authors note that Mass Spec detection can speed this up greatly by allowing multiplexing of ligands.  SPR requires more compound, but far less protein (25 µg vs. 0.5 mg).  As a counterpoint, they claim their thrombin column has a far longer lifetime than the thrombin chip.  In terms of robustness, WAC does not suffer from DMSO interference, while SPR can be mucked with by DMSO, resulting in poor isotherm fits.  They noted this in 13 of 27 compounds they studied.  WAC, in contrast, generated isotherms for all 27 compounds.
There is a high degree of correlation between the KDs determined by SPR and WAC.  The authors note one major factor is that similar conditions were used for both studies, while in the past this was not true.  In summary, SPR is faster, consumes less protein, and has higher throughput.  WAC uses less sample.  For future development, coupling WAC to Mass Spec detection could give it an advantage by allowing compounds to be multiplexed.  I am still not a fan of the method, but I want to give kudos to the authors who continue to develop it.

27 October 2014

Fragments vs Bcl-XL – selectively

One of the more popular applications of fragment-based lead discovery has been targeting protein-protein interactions. Among these, the BCL-2 family of anti-apoptotic proteins has proven particularly successful, as exemplified by several compounds in clinical trials against various cancers. SAR by NMR was used in the discovery of the chemical series that ultimately led to navitoclax, which inhibits both BCL-XL and BCL-2, and subsequent medicinal chemistry led to a selective inhibitor of BCL-2, which is now in phase 3 trials. In a recent paper in ACS Med. Chem. Lett., Zhi-Fu Tao and dozens of collaborators at AbbVie, Genentech, The Walter and Eliza Hall Institute of Medical Research (WEHI), and the University of Melbourne describe how fragments contributed to a specific inhibitor of BCL-XL.

High-throughput screening and medicinal chemistry at WEHI initially led to compound 1, a selective inhibitor of BCL-XL, which was optimized to the potent WEHI-539 in collaboration with Genentech. This molecule had some pharmacokinetic liabilities, so, in collaboration with AbbVie, the researchers came up with compound 4. To optimize this molecule further, they turned to SAR by NMR, which entails screening fragments in the presence of an initial binder to find fragments that bind to a second site.


The researchers used 2-dimensional NMR (1H–13C-HSQC) to screen a complex of BCL-XL and compound 1 against 875 highly soluble fragments in pools of five, with each fragment present at 5 mM! Compound 6 was one of the better binders, and NMR-guided docking revealed that it bound close to compound 1, suggesting a fragment linking approach. This proved successful, though as with many linking studies it was critical to get the linker just right: shortening or lengthening the linker by a single methylene decreased affinity by more than two orders of magnitude (middle right).

Although remarkably potent, the best linked compound is also quite lipophilic (ClogP = 6.2), and adding human plasma to the assay caused a marked loss in potency, suggesting it gets sopped up by serum albumin. Previous work on navitoclax had shown that introducing a basic amino group could decrease binding to albumin, and doing this while simplifying the fragment ultimately led to A-1155463, which showed extraordinary biochemical and cell-based potency as well as on-target activity in mice.

This seems to be a classic case of fragment-assisted drug discovery, where fragments played a supporting role in a larger medicinal chemistry program rather than taking center stage. In this case one could argue that the role was relatively minor, and that it may have been possible to use conventional approaches to get from WEHI-539 to A-1155463. Nonetheless, the information provided by the fragments likely led to a fuller understanding of the binding pocket, and in difficult targets like this it is important to use all the tools at one’s disposal.

21 October 2014

Benchmark Your Process


So, not everybody agrees with me on what a fragment is.  As has been pointed out years ago, FBDD can be a FADD.  In this paper, from earlier this year, a group from AZ discusses how FBDD was implemented within the infectious disease group. Of course, because of the journal, it emphasizes how computational data is used, but you can skim over that and still enjoy the paper :-). They break their process into several steps.
Hot Spots: This is a subject of much work, particularly from the in silico side.  In short, a small number of target residues provide the majority of the energy of interaction with ligands.  Identifying these, especially for non-active-site targets (read PPI), is highly enabling for both FBDD and SBDD. To this end, the authors discuss various in silico approaches to screening fragments.  They admit they are not as robust as would be desired (putting it kindly).  As I am wont to say, your computation is only as good as your experimental follow up.  The authors indicate that the results of virtual screens must be experimentally tested.  YAY!  They also state that NMR is the preferred method, 1D NMR in particular being the AZ preferred method.  [This is something (NMR as the first choice for screening) that I think has become true only recently.  It's something I have been saying for more than a decade, but I guarantee my cheerleading is not why.] They do note that of the two main ligand-based experiments, STD is far less sensitive than WaterLOGSY.  There is no citation, so I would like to put it out there: is this the general consensus of the community?  Has anyone presented data to this effect?  Specifically, they screen fragments 5-10 per pool with WaterLOGSY and relaxation-edited techniques.  2D screening is only done for small proteins (this is in Infection) and where a gram or more of protein is available.

Biophysics:  They have SPR, ITC, EPIC, MS, and X-ray.  They mention that SPR and MS require high protein concentrations to detect weak binders and thus are prone to artifacts.  They single out the EPIC instrument as being the highest throughput.  [As an aside, I have heard a lot of complaints about the EPIC and wonder if this machine is still the frontline machine at AZ.]  60% of targets they tried to immobilize were successful.  They also use "Inverse" SPR, putting the compounds down; the same technology NovAliX has in their Chemical Microarray SPR.  In their experience, 25% of these "Target Definition Compounds" still bind to their targets. 

They utilize a fragment-based crystallography proof of principle (fxPOP).  Substrate-like fragments (kinda like this?) are screened in the HTS, hits [not defined] are then soaked into the crystal system, and at least one structure of a fragment is solved.  This fragment is then used for in silico screening, pharmacophore models, and the like.  So, this would seem to indicate that crystals are required before FBDD starts.  They cite the Astex Pyramid where fragments of diverse shape are screened and the approach used at JnJ where they screen similar shaped fragments and use the electron density to design a second library to screen.

As I have always said, there are non-X-ray methods to obtain structural information.  AZ notes that SOS-NMR, INPHARMA, and iLOE are three ways.  These are three of the most resource intensive methods: SOS-NMR requires labeled protein (and not of the 15N kind), INPHARMA requires NOEs between weakly competitive ligands (and a boatload of computation), while iLOE requires NOEs of simultaneously binding ligands.  I think there are far better methods, read as requiring fewer resources, to give structural information more quickly (albeit at lower resolution).

The Library:  They describe in detail how they generated their fragment libraries.  They have a 20,000 fragment HCS library.  The only hard filter is to restrict heavy atom count to less than 18.  I fully support that.  They also generated a 1200 fragment NMR library biased towards infection targets.

The Process:   The authors list three ways to tie these methods together:
  1. Chemical Biology: Exploration of binding sites/development of pharmacophores.  I would add that this is also for target validation.  As shown by Hajduk et al. and Edfeldt et al., fragment binding is highly correlated to advancement of the project. 
  2. Complementary to HTS.  At the conference I am at today, one speaker (from Pfizer) said that HTS was for selectivity, FBDD was for efficiency (oh Lord, here comes Pete with that one).  I really like that approach.
  3. Lastly, stand alone hit generation.  
I think this paper is a nice reference for those looking to see how one company put their FBDD process in place. Not every company will do it the same, nor should they.  But there is a FBDD process for every company.

20 October 2014

Caveat emptor

Practical Fragments rarely has guest bloggers, but we do make exceptions in special cases. What follows is a (lightly edited) analysis from Darren Begley that appeared on the Emerald blog last year, but since the company's transformation to Beryllium it is impossible to find. This post emphasizes how important it is to carefully analyze commercial compounds. (–DAE)

In a LinkedIn Discussion post, Ben Davis posed the following question:

Do any of the commercially available fragment libraries come with reference 1D NMR spectra acquired in aqueous solution?

Most commercial vendors of fragments do not offer nuclear magnetic resonance (NMR) reference spectra with their compounds that are useful to fragment screeners; if anything, the experiment is conducted in 100% organic solvent, at room temperature, at relatively low magnetic field strength (DAE: though see here for an exception). The NMR spectra of fragments and other small molecules are greatly affected by solvents, and can vary from sample to sample. Different buffers, solvents, temperatures and magnetic field strengths can generate large spectral differences for the exact same compound. As a result, NMR reference spectra acquired for fragments in organic solvent cannot be used to design fragment mixtures, a common approach in NMR screening. Furthermore, solubility in organic solvent is no measure of solubility in the mostly aqueous buffer conditions typically used in NMR-based fragment screening.

At Emerald [now Beryllium], we routinely acquire NMR reference spectra for all our commercially-sourced fragment screening compounds as part of our quality control (QC) procedures. This is necessary to ensure the identity, the purity and the solubility of each fragment we use for screening campaigns. These data are further used to design cocktails of 9-10 fragments with minimal peak overlap for efficient STD-NMR screening in-house. 
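The cocktail-design step can be thought of as a greedy grouping problem: place each fragment into the first pool where none of its 1H peaks fall within some tolerance of an existing member's peaks. The sketch below is purely illustrative (the peak lists, pool size, and 0.02 ppm tolerance are my assumptions, not Emerald's actual procedure):

```python
# Illustrative greedy cocktail design for NMR screening: no two
# fragments in a pool may have 1H peaks closer than `tol` ppm.
def build_pools(peak_lists, pool_size=10, tol=0.02):
    """peak_lists: dict mapping fragment name -> list of shifts (ppm).

    Returns a list of pools (lists of fragment names), each with at
    most pool_size members and no intra-pool peak overlap.
    """
    pools = []
    for name, peaks in peak_lists.items():
        placed = False
        for pool in pools:
            if len(pool) >= pool_size:
                continue
            clash = any(
                abs(p - q) < tol
                for member in pool
                for p in peaks
                for q in peak_lists[member]
            )
            if not clash:
                pool.append(name)
                placed = True
                break
        if not placed:
            pools.append([name])  # start a new pool
    return pools

# Hypothetical example: B overlaps A, so it lands in its own pool.
pools = build_pools({"A": [1.0], "B": [1.01], "C": [5.0]})
```

A real implementation would also avoid solvent regions and balance pool sizes, but the greedy pass captures the basic idea of designing cocktails with minimal peak overlap.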

Recently, we selected a random set of commercial fragment compounds, and closely examined those that failed our QC analysis. The most common reason for QC failure was insolubility (47%), followed by degradation or impurities (39%), and then spectral mismatch (17%).  (Since compounds can acquire multiple QC designations, total incidences exceed 100%.) Less than 4% of all compounds assayed failed because they lacked requirements for NMR screening (that is, sufficiently distinct from solvent peaks or lack of non-exchangeable protons). Failure rates were as high as 33% per individual vendor, with an overall average of 16% (see Figure).
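As a small illustration of the arithmetic, here is why per-category failure rates can sum past 100% when each failed compound may carry several QC flags (the records below are invented, not Emerald's data):

```python
from collections import Counter

# Hypothetical QC records: each failed compound carries one or more flags.
failures = [
    {"insoluble"},
    {"insoluble", "impure"},
    {"impure"},
    {"insoluble", "mismatch"},
]

# Count each flag across all failed compounds.
counts = Counter(flag for flags in failures for flag in flags)

# Per-category rate, as a percentage of failed compounds.
rates = {flag: 100 * n / len(failures) for flag, n in counts.items()}

# Because flags overlap, the rates sum to more than 100%.
total = sum(rates.values())
```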



These results highlight the importance of implementing tight quality control measures for preliminary vetting of commercially-sourced materials, as well as maintaining and curating a fragment screening library. They also suggest that 10-15% of compounds will fail quality control, regardless of vendor. Do these numbers make sense to you? How do they measure up with your fragment library?

Let us know what you think. (–DB)

15 October 2014

When a Fragment is DEFINITELY not a Fragment

There are lots of papers that use "fragments" or "fragment approaches".  I find a lot of computational papers do this; is it because FBDD has won the field, or because it's sexy?  Well, in this paper the authors take an interesting spin on the term fragment. For many targets (particularly PPIs), peptides are the only tool to assess binding, or the best binders.  However, despite a small vocal minority, I think most people don't consider peptides to be drugs, but instead good starting points.  The REPLACE (Replacement with Partial Ligand Alternatives through Computational Enrichment) method is used on the CDK2A system to identify fragment alternatives to the N-terminal portions of the peptide, especially the crucial arginine residue.  As I say, repeatedly, Your Computation is only as good as your Experimental Follow up.

This group took a very cautious approach to the initial modeling, understanding that PPIs are difficult to study via computational methods.  They used crystal structures of FLIPs (Fragment-Ligated Inhibitory Peptides) and modeled the compounds against subunits B and D.  Subunit B gave better results and so was used for further modeling.  [I hate this kind of stuff; it strikes me as wrong.]  After further work, they concluded that the modeling was validated and would be predictive for new compounds.  They then designed a library based on a pharmacophore model using phenylacetate, five-membered heterocycle, and picolinate scaffolds.
Modeled Compounds.  Cyclin residues have three-letter codes, peptide residues one-letter codes.  The solid lines show interactions between acidic cyclin D1 residues and the piperazinylmethyl group of the inhibitor.
They then, bless their hearts, made some compounds. 
In the end, they showed that it is possible to turn peptides into small molecule-ish compounds.  Please note these activities are in the millimolar range!  So, even with the current debate as to what PPI fragments should look like, I find it very hard to believe that these molecules are in any way fragments.  Grafting a fragment-looking something onto a big something is not "Fragment-based Discovery".

13 October 2014

New poll: FBLD in academia

Since almost half our readers come from academia, we thought the following poll would be of interest. It is being conducted by Michelle Arkin of the University of California San Francisco, one of the powerhouses of FBLD, and should take just a couple minutes.

Please click here to answer four questions regarding your lab, whether you do FBLD, which fragment-finding techniques you use, and how you follow up on fragment hits. The results will be published in a forthcoming new edition of "Fragment-based Approaches in Drug Discovery."

Thanks!

08 October 2014

PPI Fragment Libraries...what do YOU think they should look like?

Dan and I are at the CHI Discovery on Target meeting in Boston.  We taught our award-winning course on FBDD yesterday, but with an emphasis on PPI.  And just in time, Andrew Turnbull, Susan Boyd, and Bjorn Walse published a paper on PPIs and FBDD (it's open access [Ed: link fixed]).  The paper is a nice review of PPIs in general (which have already been covered here): structural characteristics, the "hot spot", and computational approaches.  What I thought interesting was their discussion of the physico-chemical properties of PPI fragments.  This is an area where there is a lot of common knowledge, but nothing rigorously studied.  So, into the breach.

They discuss vendor-supplied PPI libraries: Asinex, Otava (which does not seem to be a fragment library, with a mean MW of ~500), and Life Chemicals.  PPI compounds are thought to need to be more 3D and obey the rule of 4: PPI compounds will tend to be larger and more lipophilic.  Does ontogeny recapitulate phylogeny?  To explore this, they looked at 100 fragments orthosterically active against PPI targets (unpublished data) and compared them to 100 fragments active against non-PPI targets. 
It appears that PPI fragments are a little larger and more lipophilic than "standard" fragments, but NOT any more 3D.  It should also be noted that the PPI fragments had double the acid and base containing fragments than "standard" fragments.  The authors agree that their dataset is small, and other groups are looking at larger datasets.  But, the conclusion they draw is that PPI fragments should be larger, more lipophilic, and contain at least one polar moiety. 

06 October 2014

Physical properties in drug design

This is the title of a magisterial review by Rob Young of GlaxoSmithKline in Top. Med. Chem. At 68 pages it is not a quick read, but it does provide ample evidence that physical properties are ignored at one’s peril. It also offers a robust defense of metrics such as ligand efficiency.

The monograph begins with a restatement of the problem of molecular obesity: the tendency of drug leads to be too lipophilic. I think everyone – even Pete Kenny – agrees that lipophilicity is a quality best served in moderation. After this introduction Young provides a thorough review of physical properties including lipophilicity/hydrophobicity, pKa, and solubility. This is a great resource for people new to the field or those looking for a refresher.

In particular, Young notes the challenges of actually measuring qualities such as lipophilicity. Most people use log P, the partition coefficient of a molecule between water and 1-octanol. However, it turns out that it is difficult to experimentally measure log P for highly lipophilic and/or insoluble compounds. Also, as Kenny has pointed out, the choice of octanol is somewhat arbitrary. Young argues that chromatographic methods for determining lipophilicity are operationally easier, more accurate, and more relevant. The idea is to measure the retention times of a series of compounds on a C-18 column eluted with buffer/acetonitrile at various pH conditions to generate “Chrom log D” values. Although a stickler could argue this relies on arbitrary choices (why acetonitrile? Why a C-18 column?) it seems like a reasonable approach for rapidly assessing lipophilicity.
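As I read it, the chromatographic approach amounts to calibrating retention times against standards of known lipophilicity and interpolating. A minimal least-squares sketch (the standards and numbers below are placeholders, not GSK's actual calibration set or procedure):

```python
# Illustrative calibration of HPLC retention time to "Chrom log D".
# Assumes a linear relationship over the working range, fit against
# hypothetical standards of known lipophilicity.
def fit_calibration(standards):
    """standards: list of (retention_min, known_logd) pairs.

    Returns (slope, intercept) of the least-squares line.
    """
    n = len(standards)
    sx = sum(t for t, _ in standards)
    sy = sum(d for _, d in standards)
    sxx = sum(t * t for t, _ in standards)
    sxy = sum(t * d for t, d in standards)
    slope = (n * sxy - sx * sy) / (n * sxx - sx * sx)
    intercept = (sy - slope * sx) / n
    return slope, intercept

def chrom_logd(retention_min, slope, intercept):
    """Interpolate a test compound's Chrom log D from its retention time."""
    return slope * retention_min + intercept

# Hypothetical standards: retention (min) vs. known log D.
slope, intercept = fit_calibration([(1.0, 0.0), (2.0, 1.0), (3.0, 2.0)])
value = chrom_logd(2.5, slope, intercept)
```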

Next, Young discusses the influence of aromatic ring count on various properties. Although the strength of the correlation between Fsp3 and solubility has been questioned, what’s not up for debate is the fact that the majority of approved oral drugs have 3 or fewer aromatic rings.

Given that 1) lipophilicity should be minimized and 2) most drugs contain at most just a few aromatic rings, researchers at GlaxoSmithKline came up with what they call the Property Forecast Index, or PFI:

PFI = (Chrom log D7.4) + (# of aromatic rings)

An examination of internal programs suggested that molecules with PFI > 7 were much more likely to be problematic in terms of solubility, promiscuity, and overall development. PFI looks particularly predictive of solubility, whereas there is no correlation between molecular weight and solubility. In fact, a study of 240 oral drugs (all with bioavailability > 30%) revealed that 89% of them have PFI < 7.
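The PFI arithmetic is simple enough to write down directly; the flagging threshold of 7 is the one from the GSK analysis above:

```python
# PFI = Chrom log D at pH 7.4 + number of aromatic rings.
def pfi(chrom_logd_74, n_aromatic_rings):
    return chrom_logd_74 + n_aromatic_rings

def flag_developability(chrom_logd_74, n_aromatic_rings, threshold=7.0):
    """True if the molecule exceeds the PFI risk threshold of 7."""
    return pfi(chrom_logd_74, n_aromatic_rings) > threshold

# A molecule with Chrom log D 5.1 and 3 aromatic rings (PFI 8.1) is flagged.
risky = flag_developability(5.1, 3)
```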

Young summarizes: the simple mantra should be to “escape from flatlands” in addition to minimising lipophilicity.

The next two sections discuss how the pharmacokinetic (PK) profile of a drug is affected by its physical properties. There is a nice summary of how various types of molecules are treated by relevant organs, plus a handy diagram of the human digestive tract, complete with volumes, transit times, and pH values. There is also an extensive discussion of the correlation between physical properties and permeability, metabolism, hERG binding, promiscuity, serum albumin binding, and intrinsic clearance. The literature is sometimes contradictory (see for example the recent discussion here), but in general higher lipophilicity and more aromatic rings are deleterious. Overall, PFI seems to be a good predictor.

The work concludes with a discussion of various metrics, arguing that drugs tend to have better ligand efficiency (LE) and LLE values than other inhibitors for a given target. For example, in an analysis of 46 oral drugs against 25 targets, only 2.7% of non-kinase inhibitors have better LE and LLE values than the drugs (the value is 22% for kinases). Similarly, the three approved Factor Xa inhibitors have among the highest LLEAT values of any compounds reported.
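For reference, the metrics in this analysis are commonly computed as below. These are the standard literature definitions (LLEAT per Mortenson and Murray); Young's exact conventions may differ slightly:

```python
# Common literature definitions, assumed here:
#   LLE   = pIC50 - cLogP
#   LLEAT = 0.111 + 1.37 * LLE / heavy-atom count
def lle(pic50, clogp):
    """Lipophilic ligand efficiency."""
    return pic50 - clogp

def lle_at(pic50, clogp, heavy_atoms):
    """Size-normalized LLE, on the same scale as ligand efficiency."""
    return 0.111 + 1.37 * lle(pic50, clogp) / heavy_atoms

# A 10 nM compound (pIC50 = 8) with cLogP 3 and 25 heavy atoms:
example = lle_at(8.0, 3.0, 25)
```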

Some of the criticism of metrics has focused on their arbitrary nature; for example, the choice of standard state. However, if metrics correlate with a molecule's potential to become a drug, it doesn’t really matter precisely how they are defined.

The first word in the name of this blog is Practical. The statistician George Box once wrote, “essentially, all models are wrong, but some are useful.” Young provides compelling arguments that accounting for physical properties – even with imperfect models and metrics – is both practical and useful.

Young says essentially this as one sentence in a caveat-filled paragraph:

The complex requirements for the discovery of an efficacious drug molecule mean that it is necessary to maintain activity during the optimisation of pharmacokinetics, pharmacodynamics and toxicology; these are all multi-factorial processes. It is thus perhaps unlikely that a simple correlation between properties might be established; good properties alone are not a guarantee of success and some effective drugs have what might be described as sub-optimal properties. However, it is clear that the chances of success are much greater with better physical properties (solubility, shape and lower lipophilicity). These principles are evident in both the broader analyses with attrition/progression as a marker and also in the particular risk/activity values in various developability screens.

In other words, metrics and rules should not be viewed as laws of nature, but they can be useful guidelines to control physical properties.

01 October 2014

Safran Zunft Challenge

Dan has already hit the highlights of FBLD 2014.  I won't do the lowlights; they were few and far between.  I will try to give some flavor of the conference.  If you missed it, the t-shirts given out had this as a design
This was designed by Lukasz Skora of Novartis.  It is the keywords used here at the blog, sized by frequency.  That is pretty cool.  It also gives us an idea of what we are really talking about here.  I have some other "flavor of Basel" pictures posted here.  

The conference was excellent, just the right size to allow people to interact at a high level.  The dinner was especially good for this (and the unending wine/beer didn't hurt!).   I have been lucky this year to be at many conferences with many different people.  Damian Young at the Baylor College of Medicine Center for Drug Discovery (and recently of the Broad) has been speaking at all of them about his Diversity Oriented Synthesis (DOS) approach to generating fragments.  Well, this has bothered me: DOS is not Fragments.  Am I some sort of Luddite?  Am I being too purist?  Could be.  

Well, an eminent group of FBLD-ers was gathered around a table during the conference dinner, including Justin Bower and Martin Drysdale of the Beatson, Chris Smith of Takeda, the aforementioned Dr. Young, myself, Terry Crawford of Genentech, and Beth Thomas of the CCDC.  So, out of this discussion comes the Safran Zunft Challenge, administered by Dr. Bower.  I bet Damian that his molecules are too complex to be "fragments".  What will this mean?  I am betting that a "bad" interaction is worse than a good one, which goes all the way back to the Hann model.  

So, here is one molecule from Damian's presentation.  I have nothing against it per se; it is just for illustrative purposes.  I bet that his molecules will not have a LEAN (pIC50/HAC) >0.3.  [This is the metric I like, Pete.  I understand the limitations.]  By FBLD2016, Damian expects to have data on his molecules (and he is looking for partners).  If I lose, I owe the undersigned a beer.  Below we have preserved for posterity the discussion and those who were there (no hopping on the beer bandwagon late, people!).  
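For concreteness, the LEAN bar in the bet is just pIC50 divided by heavy atom count, with 0.3 as the cutoff:

```python
# LEAN as defined in the post: pIC50 / heavy-atom count (HAC),
# with 0.3 as the bar for the Safran Zunft Challenge.
def lean(pic50, heavy_atom_count):
    return pic50 / heavy_atom_count

def wins_bet(pic50, heavy_atom_count, cutoff=0.3):
    """True if the molecule clears the LEAN > 0.3 bar (i.e., I lose)."""
    return lean(pic50, heavy_atom_count) > cutoff

# A 1 uM binder (pIC50 = 6) with 20 heavy atoms sits exactly at 0.3.
on_the_line = lean(6.0, 20)
```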

I also think this is a good way for us to discuss the ontology of a "fragment".  To me, it's not just size; it is more about its "nature".  Fragments rely on simple molecules; adding complexity, even with small molecules, strips away the "fragment-ness", IMNSHO.