Monday, January 27, 2020

Plasmodium Falciparum Life Cycle

Malaria is one of the world's leading causes of death, especially among people living in sub-Saharan Africa and other tropical regions. Of the five species of the genus Plasmodium, the protozoan parasite that causes malaria, known to infect humans, Plasmodium falciparum is responsible for the most virulent, severe and dangerous form of human malaria. Over the years, chemotherapy has played a central role in strategies towards the eradication of this disease. However, the ability of P. falciparum to develop resistance to effective and affordable drugs, and to pyrethroids, the active principle of insecticide-treated nets (ITNs), has made the constant search for new pharmacotherapy imperative. This review presents an overview of the life cycle of the causative organism (P. falciparum), the efforts at controlling the disease, and the molecular and cellular basis of the infection, with special emphasis on molecular chaperones of the heat shock protein family as critical components of the parasite's intra-erythrocytic development and survival. The motivation for the present work is also presented.

1.1 Introduction

Malaria, whose pathogen is transmitted by female Anopheles mosquitoes, is both preventable and curable, yet it still impacts negatively on the health of millions of people and accounts for a high rate of mortality, especially among children in sub-Saharan Africa (Breman, 2001; Greenwood et al., 2008; Hay et al., 2004; Rowe et al., 2006; Snow et al., 2005). Five species of the genus Plasmodium, the protozoan parasites responsible for malaria infection, are known to infect humans: P. falciparum, P. vivax, P. malariae, P. ovale and P. knowlesi. It has been proposed that P. ovale consists of two species (Cox-Singh, 2010) and that zoonosis is the medium through which P. knowlesi infects humans (White, 2008). Of these, P. falciparum and P. vivax are numerically the most important, with the former responsible for the most virulent, severe and dangerous form of human malaria (Greenwood et al., 2008). The World Health Organization (WHO) 2011 malaria report (WHO | World Malaria Report 2011, 2012) estimated a total of 216 million episodes of malaria in 2010 with at least 655 000 deaths, mostly in Africa and among children under the age of 5 years. Malaria was reported to be prevalent in 99 countries, with an estimated 3.3 billion people at risk. Though support from international donors has led to a rapid decrease in malaria mortality, especially among adults in Africa, Murray et al. (2012) contended that the malaria mortality burden may actually be larger than previously estimated and that, for elimination and eradication to be achieved at larger scale, there is an urgent need for more support.
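To put these reported figures in perspective, the sketch below works through the simple arithmetic they imply. It is only an illustrative back-of-envelope calculation using the 2010 estimates quoted above; the derived ratios are not figures published by WHO.

```python
# Illustrative arithmetic on the WHO World Malaria Report 2011 estimates quoted above.
# The derived ratios are rough back-of-envelope values, not WHO-reported statistics.

episodes_2010 = 216_000_000          # estimated malaria episodes in 2010
deaths_2010 = 655_000                # estimated deaths in 2010 ("at least", i.e. a lower bound)
population_at_risk = 3_300_000_000   # estimated people at risk worldwide

deaths_per_1000_episodes = deaths_2010 / episodes_2010 * 1000
episodes_per_person_at_risk = episodes_2010 / population_at_risk

print(f"Deaths per 1,000 estimated episodes: {deaths_per_1000_episodes:.1f}")       # ~3.0
print(f"Estimated episodes per person at risk: {episodes_per_person_at_risk:.3f}")  # ~0.065
```

Roughly three deaths per thousand estimated episodes, and about one episode for every fifteen people at risk: figures that underline why even modest changes in treatment access or drug efficacy translate into large absolute numbers.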
Factors such as poor sanitation, malnutrition, lack of or reduced access to medication, poverty, and the location of many of the affected poor countries in the tropical zones combine to create an enabling environment for the disease to thrive. Though preventive approaches such as good sanitation and the distribution of insecticide-treated nets (ITNs) (Curtis et al., 2006) have been employed as strategies towards the eradication of this disease, chemotherapy remains the most widely used approach. The ability of P. falciparum to develop resistance to effective and affordable drugs (Cheeseman et al., 2012; Jambou et al., 2005; Müller & Hyde, 2010; Phyo et al., 2012) and to pyrethroids, the active principle of insecticide-treated nets (Fane et al., 2012; N'Guessan et al., 2007), has made the constant search for new pharmacotherapy imperative.

The malaria parasite life cycle is a complex process involving two hosts, humans and female Anopheles mosquitoes. However, the clinical symptoms of the disease are associated with the invasion of erythrocytes by the parasite, its growth and division inside the host cell, and the cyclic cell lysis and reinvasion of new erythrocytes. The intra-erythrocytic survival and development of the parasite, as well as the pathology of the infection, are linked to structural and functional remodelling of the host cell through the export of parasite-encoded proteins (Botha et al., 2007; Miller et al., 2002; Pesce & Blatch, 2009; Przyborski & Lanzer, 2005). Attempts have been made to present an extensive description of the protein interaction network of P. falciparum (LaCount et al., 2005), and about 300 parasite-encoded proteins are predicted to be exported (Marti et al., 2004; Sargeant et al., 2006). Among the exported proteins are the molecular chaperones of the heat shock protein family (Nyalwidhe & Lingelbach, 2006). Molecular chaperones are a family of proteins that function to stabilize proteins, facilitate their translocation across intracellular membranes and their degradation, and ensure that proteins in a cell are properly folded and functional (Hartl & Hayer-Hartl, 2002; Hartl, 1996). PFA0660w belongs to an extended family of Hsp40 proteins predicted to be transported by the parasite into the host cell (Hiller et al., 2004; Marti et al., 2005; Sargeant et al., 2006). It is a Type II Hsp40 protein, said to be homologous to human DnaJB4, a cytosolic Type II Hsp40 known to interact with human Hsp70 to facilitate protein folding, transport and assembly (Botha et al., 2007). Recent studies have localized PFA0660w to structures in the infected erythrocyte called J-dots (Kulzer et al., 2010), and it is said to be exported in complex with P. falciparum Hsp70-x (PfHsp70-x) into the J-dots (Kulzer et al., 2012). This, together with the failure to obtain a viable PFA0660w knock-out parasite (Maier et al., 2008), suggests that it may be essential for the survival of the parasite in the infected erythrocyte and is therefore a potential target for drug action.

1.2 Malaria Infection

1.2.1 Background information

Malaria, though curable and preventable, remains a life-threatening disease that was noted more than 4,000 years ago and has been responsible for millions of deaths. The World Health Organization (WHO) lists malaria among the most important infectious diseases of the tropics, and it forms part of the sixth Millennium Development Goal (MDG 6) (WHO | MDG 6: combat HIV/AIDS, malaria and other diseases, 2012). Target 6C of MDG 6 is to halt malaria and other major diseases by 2015 and begin to reverse their incidence. Strategies advocated by WHO to combat malaria include prevention with long-lasting insecticide-treated bed nets (ITNs) and indoor residual spraying, and rapid treatment with effective anti-malarial medicines, with special focus on pregnant women and young children.
WHO Roll Back Malaria further recommends that, to control Plasmodium falciparum malaria during pregnancy, intermittent preventive treatment (IPTp) or chemoprophylaxis should be encouraged, in addition to individual protection with ITNs and prompt management of anaemia and malaria using effective anti-malarial drugs (WHO | Malaria in pregnancy, 2012). Though the World Health Organization (WHO) 2011 malaria report (WHO | World Malaria Report 2011, 2012) estimated at least 655 000 deaths as a result of malaria infection, mostly in Africa and among children under the age of 5 years, the mortality burden may actually be larger than previously estimated; hence the need for improved support from funding organizations if the much-needed malaria elimination and eradication is to be achieved (Murray et al., 2012).

Malaria is caused by the transmission of parasites to humans by female Anopheles mosquitoes during a blood meal. Plasmodium falciparum is known to be responsible for a high rate of mortality, especially among children in sub-Saharan Africa, mostly under the age of 5 years (Breman, 2001; Greenwood et al., 2008; Hay et al., 2004; Rowe et al., 2006; Snow et al., 2005). Apart from the fact that many of the most affected countries are located in the tropical regions of the world, increasing levels of poverty, with their attendant economic consequences, coupled with lack of or improper sanitation and reduced access to prompt medication, are factors that create an enabling environment for the disease to thrive. Though preventive approaches such as the use of insecticide-treated bed nets, IPTp and chemoprophylaxis, together with good sanitation (Curtis et al., 2006; WHO | Malaria in pregnancy, 2012), have been employed as strategies towards the eradication of this disease, the use of chemotherapeutic drugs remains the most widely used approach (Butler et al., 2010; D'Alessandro, 2009). However, the success of this strategy has been hampered by the resilience of the parasite in continually developing resistance to the available drugs. The ability of P. falciparum to develop resistance to effective and affordable drugs (Cheeseman et al., 2012; Jambou et al., 2005; Müller & Hyde, 2010; Phyo et al., 2012) and to pyrethroids, the active principle of insecticide-treated nets (ITNs) (Fane et al., 2012; N'Guessan et al., 2007), has made the constant search for new pharmacotherapy imperative. Notwithstanding the centrality of chemoprophylaxis and chemotherapy in efforts to combat the menace of malaria infection (D'Alessandro, 2009), and the wide distribution of insecticide-impregnated bed nets, efforts aimed at achieving long-lasting protective immunity through vaccination, of which RTS,S is emerging as the most promising vaccine formulation, have also been intensified (Ballou, 2009; Casares et al., 2010).

1.2.2 Life Cycle of Plasmodium falciparum

The malaria parasite life cycle (Figure 1.1) is a complex process involving two hosts, humans and female Anopheles mosquitoes. The survival of the parasite during the several stages of its development depends on its ability to invade and grow within multiple cell types and to evade host immune responses using specialized proteins (Florens et al., 2002; Greenwood et al., 2008). Sporozoites (the infective stage), merozoites (the erythrocyte-invading stage), trophozoites (the multiplying form in erythrocytes) and gametocytes (the sexual stages) are the stages involved in the development of the parasite. These stages are distinct in shape, structure and protein complement.
The continuous changes in surface proteins and metabolic pathways during these stages help the parasite to survive the host immune response and create challenges for drug and vaccine development (Florens et al., 2002). The sporogonic or sexual phase occurs in the mosquito, resulting in the development of numerous infective forms of the parasite which, when transmitted to a human host, induce disease. During a blood meal taken by a female Anopheles mosquito from an individual infected with malaria, the male and female gametocytes of the parasite enter the gut of the mosquito, adjust to the insect host environment and initiate the sporogonic cycle. The fusion of male and female gametes produces zygotes, which subsequently develop into actively moving ookinetes that pierce the mosquito midgut wall to develop into oocysts. Each oocyst divides to produce numerous active haploid forms called sporozoites, which are released into the mosquito's body cavity when the oocyst bursts. The released sporozoites travel to and invade the mosquito salivary glands, from where they are injected into the human bloodstream during another blood meal, causing malaria infection (Barillas-Mury & Kumar, 2005; Ferguson & Read, 2004; Hill, 2006).

The parasite life cycle thus traverses two hosts (human and mosquito), with each stage involving complex cellular and molecular modifications. During a blood meal, female Anopheles mosquitoes release saliva to prevent blood clotting; sporozoite-infected saliva is thereby deposited into the human host. The sporozoites make their way to the liver and develop over time into hypnozoites (a dormant stage, usually responsible for relapse of infection) or merozoites (which are released into the bloodstream to invade erythrocytes). The clinical symptoms of the disease are associated with the invasion of erythrocytes by the parasite, its growth and division inside the host cell, and the cyclic cell lysis and reinvasion of new erythrocytes. The schizogonic or asexual phase of the life cycle occurs in the human host. The cycle is initiated in the liver by the inoculated sporozoites and later continues within the red blood cells, resulting in the clinical manifestations of malaria. Following the introduction of invasive sporozoites into the skin by a mosquito bite, they are either destroyed by macrophages, or they enter the lymphatics and drain into the lymph nodes, where they can develop into exoerythrocytic stages (Vaughan et al., 2008) and prime T cells to mount a protective immune response (Good & Doolan, 2007), and/or they enter a blood vessel (Silvie et al., 2008b; Vaughan et al., 2008; Yamauchi et al., 2007), from where they make their way to the liver. In the liver, sporozoites negotiate the liver sinusoids, enter hepatocytes, and then multiply and grow within parasitophorous vacuoles into schizonts, each of which contains thousands of merozoites, especially in P. falciparum (Amino et al., 2006; Jones & Good, 2006; Kebaier et al., 2009). Members of the thrombospondin-related anonymous protein (TRAP) family and an actin-myosin motor have been shown to drive the sporozoite's continuous sequence of stick-and-slip motility (Baum et al., 2006; Münter et al., 2009; Yamauchi et al., 2007), and its growth and development within the liver cells is facilitated by the circumsporozoite protein of the parasite (Prudêncio et al., 2006; Singh et al., 2007).
This stick-and-slip motility prevents the parasite from being washed away by the circulating blood into the kidney, from where it could be destroyed and removed from the body. Motility is driven by an actin-myosin motor located underneath the plasma membrane; this unique actin-myosin system propels the sporozoite's journey and allows extracellular migration, cell traversal and cell invasion (Kappe et al., 2004). The liver stage is a single-cycle phase with no clinical symptoms, unlike the erythrocytic stage, which occurs repeatedly and is characterized by clinical manifestations. The hepatic merozoites are packaged in vesicles called merosomes, where they are protected from the phagocytic action of Kupffer cells. The release of these merozoites into the bloodstream via the lung capillaries initiates the blood stage of the infection (Silvie et al., 2008b). In some cases (as found in P. vivax and P. ovale malaria) dormant sporozoites, called hypnozoites, are formed and remain in the liver for a long time. These hypnozoites are usually responsible for relapses of clinical malaria infection and have been reported to be genotypically different from the infective sporozoites inoculated by the mosquito bite (Cogswell, 1992; Collins, 2007).

The development of the parasite within the red blood cells occurs with precise cyclic accuracy, with each repeated cycle producing hundreds of daughter cells that subsequently invade more red blood cells. The clinical symptoms of the disease are associated with the invasion of erythrocytes by the parasite, its growth and division inside the host cell, and the cyclic cell lysis and reinvasion of new erythrocytes. The invasion of RBCs by merozoites takes place within seconds and is made possible by a series of receptor-ligand interactions. The ability of the merozoite to quickly disappear from the circulation into the RBCs protects its surface antigens from exposure to the host immune response (Cowman & Crabb, 2006; Greenwood et al., 2008; Silvie et al., 2008b). Unlike P. vivax, which invades RBCs by binding to the Duffy blood group antigen, the more virulent P. falciparum possesses a variety of Duffy binding-like (DBL) homologous proteins and reticulocyte binding-like homologous proteins that allow it to recognize and bind effectively to different RBC receptors (Mayer et al., 2009; Weatherall et al., 2002). Micronemes, rhoptries and dense granules are the specialized apical secretory organelles of the merozoite that help it to attach, invade and establish itself in the red cell. The successful formation of a stable parasite-host cell junction is followed by entry into the cell through the erythrocyte bilayer. This entry is made possible with the aid of the actin-myosin motor, proteins of the thrombospondin-related anonymous protein (TRAP) family and aldolase, leading to the creation of a parasitophorous vacuole that isolates the intracellular ring-stage parasite from the host-cell cytoplasm, thereby creating a conducive environment for its development (Bosch et al., 2007; Cowman & Crabb, 2006; Haldar & Mohandas, 2007). The intra-erythrocytic parasite is then faced with the challenge of surviving in an environment devoid of standard biosynthetic pathways and intracellular organelles.
This challenge is overcome by the ability of the parasite to adjust its nutritional requirements to haemoglobin only, by the formation of a tubovesicular network that expands its surface area, and by the export of a range of remodelling and virulence factors into the red cell (Silvie et al., 2008b). Following ingestion of haemoglobin into the food vacuole, it is degraded to make amino acids available for protein biosynthesis. The free haem released in the process is toxic and capable of destroying the parasite within the red blood cell; the parasite detoxifies it using haem polymerase, and the resulting product is sequestered as haemozoin. As the parasite grows and multiplies, new permeation pathways are created in the host cell membrane to help in the uptake of solutes from the extracellular medium, the disposal of metabolic wastes, and the initiation and maintenance of electrochemical ion gradients, thereby preserving the osmotic stability of the infected red cell and preventing premature haemolysis (Kirk, 2001; Lew et al., 2003).

1.2.3 Control of Malaria Infection

Preventive measures are a critical step towards the control and eradication of malaria. Preventive approaches can broadly be divided into two: infection control and vector control. Infection control focuses on preventing the development of the disease as a result of an occasional mosquito bite or a relapse of a previous infection (Lell et al., 2000; Walsh et al., 1999b). This involves the use of chemoprophylaxis. Travellers to malaria-endemic countries are expected to start prophylaxis at least two weeks before travel and to continue for up to two weeks after. One important target group for chemoprophylaxis is pregnant women. Intermittent preventive treatment for pregnant women (IPTp) is the globally acknowledged approach for the prevention of malaria in pregnancy (Vallely et al., 2007; WHO | Malaria in pregnancy, 2012). Sulphadoxine-pyrimethamine (SP) has been used for this purpose, and there are compelling arguments for the use of artesunate-SP (Jansen, 2011). To ensure long-lasting prevention, this approach should be combined with vector control.

Vector control focuses on protection against mosquito bites, thereby preventing the transmission of the parasite to humans. Strategies for vector control include residual spraying of insecticides, insect repellent creams or sprays, sleeping under bed nets, especially insecticide-treated bed nets (ITNs), and proper sanitation (Curtis et al., 2006; Lavialle-Defaix et al., 2011; WHO | Insecticide-treated materials). WHO provides guidelines for the production, preparation, distribution and use of ITNs (WHO | Insecticide-treated materials). With the reported resistance to pyrethroids, an active principle of insecticide-treated bed nets (Fane et al., 2012; N'Guessan et al., 2007), all strategies involving the use of chemical agents face the global challenge of the development of resistance. Training in proper sanitation, and its sustainability from generation to generation, is probably the best approach to controlling malaria. Personal and general hygiene, which involves indoor and outdoor cleaning, good refuse disposal practices, eradication of stagnant water, proper sewage disposal, and clean, dry and unblocked drainages, are examples of good sanitation practices that will prevent not only malaria infection but also other killer diseases of the tropics. Sanitation is not only cheap and affordable; it is within the reach of everybody.
1.2.3.2 Malaria Chemotherapy

Despite the use of the preventive approaches outlined above (Curtis et al., 2006; WHO | Malaria in pregnancy, 2012) as strategies towards the eradication of malaria, the use of chemotherapeutic drugs remains the most widely used approach (Butler et al., 2010; D'Alessandro, 2009). These drugs are widely employed as prophylactic, suppressive and curative agents. However, the success of this strategy has been hampered by the resilience of the parasite in continually developing resistance to the available drugs. The ability of P. falciparum to develop resistance to effective and affordable drugs (Cheeseman et al., 2012; Jambou et al., 2005; Müller & Hyde, 2010; Phyo et al., 2012) and to pyrethroids, the active principle of insecticide-treated nets (ITNs) (Fane et al., 2012; N'Guessan et al., 2007), has made the constant search for new pharmacotherapy imperative.

Various approaches have been employed to identify new antimalarial agents with a view to reducing cost, ensuring availability and reducing the incidence of resistance (Rosenthal, 2003). Chemical modification of existing antimalarials is a simple approach and requires no extensive knowledge of the mechanism of drug action or the biology of the infection. Many drugs in use today have been produced using this approach, including chloroquine, primaquine and mefloquine from quinine (Stocks et al., 2001), the 8-aminoquinoline tafenoquine from primaquine (Walsh et al., 1999a), and lumefantrine from halofantrine (van Vugt et al., 2000). Another approach, the use of plant-derived compounds with little or no chemical modification, has led to the discovery of potent antimalarials such as the artemisinins (Meshnick, 2001). The use of agents not originally designed for malaria, such as folate antagonists, tetracyclines and other antibiotics reported to be active against malaria parasites (Clough & Wilson, 2001), is another viable approach to drug discovery. Resistance reversers such as verapamil, desipramine and trifluoperazine (van Schalkwyk et al., 2001) have also been used in combination with antimalarial drugs to improve therapy.

Optimization of therapy with existing antimalarial agents is widely used as a productive approach to improving treatment. Optimization of therapy underscores the need for combination therapy with newer and older drugs, and with agents not originally designed as antimalarials but which can potentiate antimalarial activity and/or block resistance to antimalarial agents. For a combination to be ideal, it should improve antimalarial efficacy, provide additive or synergistic antiparasitic activity, and slow the progression of parasite resistance. For example, combinations of artesunate with sulfadoxine/pyrimethamine (von Seidlein et al., 2000) or with amodiaquine (Adjuik et al., 2002), if devoid of underlying resistance to the artesunate partner drugs, which can lead to high rates of recrudescence (Dorsey et al., 2002), may prove to be optimal antimalarial regimens. Other combinations that have been used effectively include artesunate and mefloquine (Price et al., 1997) and artemether and lumefantrine (Lefevre et al., 2001). The combination of an analogue of proguanil (chlorproguanil) with a dihydropteroate synthase (DHPS) inhibitor (dapsone), originally produced to treat leprosy (Mutabingwa et al., 2001), has opened up a new and effective approach to antimalarial drug therapy.
The use of dapsone and other drug resistance reversers such as verapamil, desipramine, trifluoperazine (van Schalkwyk et al., 2001) and chlorpheniramine (Sowunmi et al., 1997) has shown potential for reducing the rate of drug resistance.

Table 1.1: Classes and mechanisms of antimalarial drugs, including gametocidal agents such as tafenoquine, the biguanides (proguanil, cycloguanil), trimethoprim, and antibiotics such as clindamycin and spiramycin; mechanisms listed include the transfer of electrons from ubiquinol to cytochrome c (Vaidya, 2001).

Meanwhile, one important and innovative approach to drug discovery in malaria chemotherapy is the search for new antimalarial drug targets. Such targets include the parasite membrane (Vial & Calas, 2001), the food vacuole (Banerjee et al., 2002), and the mitochondrion and apicoplast (Ralph et al., 2001; Vaidya, 2001). The cytosol, which is the centre of metabolic activities (e.g. folate metabolism and glycolysis) and enzyme activities, has proven valuable as a potential target for drug action (Plowe, 2001; Razakantoanina et al., 2000). To survive and develop within the erythrocyte, Plasmodium falciparum exports most of its virulence factors into the cytosol of the infected erythrocyte. Among these are the molecular chaperones of the heat shock protein family, which are the focus of much research and are increasingly gaining ground as potential targets of drug action (Behr et al., 1992; Kumar et al., 1990).

1.2.3.3 Malaria Vaccines

Notwithstanding the centrality of chemoprophylaxis and chemotherapy in efforts to combat the menace of malaria infection (D'Alessandro, 2009), and the wide distribution of insecticide-impregnated bed nets, efforts aimed at achieving long-lasting protective immunity through vaccination, of which RTS,S is emerging as the most promising vaccine formulation, have been intensified (Ballou, 2009; Casares et al., 2010). Attempts at producing an effective vaccine against malaria infection have, however, for many years proved unsuccessful (André, 2003; Artavanis-Tsakonas et al., 2003). A vaccine that could completely block transmission from the human to the mosquito host would be a major leap towards the global eradication of malaria, but the absence of such immunity in nature may reflect the possible partnership between the parasite and the host, developed over a long period of co-habitation (Evans & Wellems, 2002). On the other hand, a vaccine developed along the lines of naturally acquired immunity, which offers protection against morbidity and mortality, offers more encouragement. Such a vaccine would be a major step in the right direction and may not require the regular booster vaccination that a transmission-blocking vaccine would (Struik & Riley, 2004). Meanwhile, the development of natural immunity after long-term exposure to the infection, especially in people living in endemic areas, has been reported (Baird, 1995; Hoffman et al., 1987; Rogier et al., 1996). The rate of acquisition of immunity in infants is faster than in older children, but infants also stand a higher risk of developing severe malaria infection and anaemia (Aponte et al., 2007). Though adults who have acquired natural immunity and migrated to malaria-free zones risk contracting the disease upon return to their endemic region, documentary evidence reveals that their responses to such re-infection are very rapid and that they tend to respond to treatment and recover faster than those who have not been previously exposed (Di Perri et al., 1994; Jelinek et al., 2002; Lepers et al., 1988).
While this naturally acquired immunity is beneficial, it leaves the most vulnerable populations (children, and pregnant women, for though the mother may be immune, the foetus is not) at risk, as they are yet to gain enough exposure for such immunity to develop. Aponte et al. (2007) also showed that reduced exposure to P. falciparum antigens through chemoprophylaxis early in life has the potential to delay the acquisition of immunity. Furthermore, naturally acquired immunity does not appear to have any effect on the transmission of malaria. This further suggests an evolving host-parasite relationship (Evans & Wellems, 2002), which might have developed over a long period of host-parasite co-evolution. Therefore, understanding the compromises that may have developed over time between the parasite and the host may be an important approach towards developing the much-needed vaccine.

1.3 Molecular and Cellular Basis of Malaria Infection

Following a blood meal by a female Anopheles mosquito, accompanied by the release of saliva to prevent blood coagulation (Beier, 1998), malaria parasites are deposited or ejected into the skin (Frischknecht et al., 2004; Vanderberg & Frevert, 2004). By continuous gliding in the skin, the sporozoite reaches a blood vessel, breaches the endothelial barrier and enters the blood circulation (Amino et al., 2007; Vanderberg & Frevert, 2004) and/or breaches a lymphatic vessel to enter the draining lymph node, where exoerythrocytic stages of sporozoite development may take place (Amino et al., 2006). A micronemal protein called thrombospondin-related anonymous protein (TRAP) has been shown to be responsible for gliding motility and for invasion of the mosquito vector salivary gland and of the mammalian host (Kappe et al., 1999). The sporozoite's traversal to the liver, and the merozoite's invasion and remodelling of the host cell, are complex but necessary processes for the survival and development of the parasite.

1.3.1 Cell Traversal

The sporozoite possesses the ability to traverse cells, i.e. to move in and out of host cells by membrane disruption (Mota et al., 2002, 2001; Vanderberg & Stewart, 1990). Among the proteins secreted by the micronemes that have been implicated in host cell traversal are SPECT1 (sporozoite microneme protein essential for cell traversal 1) and SPECT2 (Ishino et al., 2005, 2004). The absence of SPECT1 or SPECT2 in mutant sporozoites does not prevent gliding motility but does prevent migration through host cells (Ishino et al., 2004). Other proteins of importance to sporozoite cell traversal prior to hepatocyte infection include the TRAP-like protein (Moreira et al., 2008), a sporozoite-secreted phospholipase (Bhanot et al., 2005), and the cell traversal protein for ookinetes and sporozoites (Kariu et al., 2006). Similarly, the circumsporozoite protein (CSP) probably plays a role in targeting sporozoites to hepatocytes by interacting with heparan sulfate proteoglycans (Sinnis & Sim, 1997).

1.3.2 Liver Stage Development

Upon entering the bloodstream, the infectious sporozoite makes its way to the liver. Circumsporozoite protein (CSP) is highly expressed at this stage of the parasite life cycle. Using sporozoites expressing fluorescent proteins under the control of the CSP promoter and intravital imaging, Frevert and colleagues were able to follow the movement of sporozoites in the liver (Frevert et al., 2005). The study showed that a sporozoite migrates through several hepatocytes before finally settling in one, forming a parasitophorous vacuole (PV) and beginning liver stage development.
CSP, acting through the low-density lipoprotein receptor-related protein LRP-1 and other proteins highly expressed by Kupffer cells, plays an important role in inhibiting the generation of reactive oxygen species by stimulating adenylyl cyclase activity and the production of cyclic AMP (cAMP) (Usynin et al., 2007). Ishino and co-workers reported that two parasite molecules, P36 and P52/P36p, are involved in sporozoite invasion of hepatocytes with the formation of a PV membrane (PVM) (Ishino et al., 2005). Apart from CSP, other gene products that have been implicated in liver stage development of the parasite include the sporozoite low-complexity asparagine-rich protein (SAP1) (Aly et al., 2008) and the sporozoite and liver stage asparagine-rich protein (SLARP) (Silvie et al., 2008a).

1.3.3 Erythrocyte Invasion

Erythrocyte invasion involves four steps: initial merozoite binding; reorientation and erythrocyte deformation; specific interaction and junction formation; and parasite entry (Figure 1.2). Merozoite surface protein-1 (MSP-1) is a well-characterized merozoite surface protein implicated in initial merozoite binding. It has been reported to be uniformly distributed over the merozoite surface.

Sunday, January 19, 2020

The History of Art

When we think of history we don't often think of art. We don't realize how the history of art can help us learn more about the people, the cultures, and the belief systems of those who lived hundreds and thousands of years before us. Art has developed, exerted influence, and made contributions from the great Stone Age to the present day. Art gives an insight into the changes and evolution that man and culture have gone through to become what they are today. Art is culture, art is the essence of the people who make it, and the best way to appreciate art is to look at its history and its evolution through time.

The Great Ages consist of four distinct ages: the Old Stone Age, the New Stone Age, the Bronze Age, and the Iron Age. Together these four Great Ages span the complete history of art from the beginning to the present day. Each age is named for the characteristic material used in that time: stone in the Old and New Stone Ages, bronze in the Bronze Age, and iron in the Iron Age.

The Great Ages began with the Old Stone Age, starting at 100,000 BCE. The people lived in tribes and clans and often moved from place to place, hunting and gathering to live. They believed all life was sacred and all beings were divine, including animals. The tribal teachings taught that man and nature are one. Hunting and gathering was a sacred ritual because they would often believe they were at one with the animal being hunted. Shamans and shamanesses, spiritual healers and seers mediating between the people and the spirits of animals, would often lead hunts and call forth the spirit of the animal, asking it to offer its life willingly for a successful hunt. An illustration in Art Through the Ages, 1-4 (Hall of the Bulls, found in Lascaux, c. 15,000-13,000 BC; largest bull approx. 11'6" long) is a beautiful cave painting of bulls. It shows how sacred these animals were to the people. The painter took the time not only to paint such a true-to-nature image but also to place it purposely in a remote location hundreds of feet above the entrance. The location of the painting suggests that it was used as a spiritual image that shamans would perhaps use to communicate with the spirit of the animal.

The Shamans were necessary to the t... ...ng alongside the edifice and stained glass windows that were mystically illuminated by the sun's rays. 13-29 (Interior of Ste.-Chapelle), 13-33 (St. Martin, St. Jerome, and St. Gregory, c. 1220-1230, from the Porch of the Confessors, Chartres Cathedral, France). The beginning of the Renaissance around 1500 CE is considered the start of the Late Iron Age, which is still ongoing. The Renaissance was an age of enlightenment, the rebirth of learning and culture, in which men went beyond their previous abilities, artists were considered geniuses, and private pleasure became a subject of art. Great artists like Leonardo da Vinci, Raphael, and Titian emerged from the great period of the Renaissance; they were not only geniuses but also great individual intellects who defined the greatness of art. Individualism still prevails today and is the very core of modern society. Male-dominated societies still exist, but slowly the demand for equality is changing that. During the Four Great Ages, many things have changed and many things have been lost, but time has not come to a sudden halt, nor has art; people, cultures, and mentalities continue to grow and change, and from growth comes greatness.

Saturday, January 11, 2020

Effects of British Colonial Rule in India Essay

The colonization of India and the immense transfer of wealth from the latter to Britain were vital to the success of the British Empire. In fact, the Viceroy of British India in 1894 called India "the pivot of our Empire ...". I examine the effects of the Industrial Revolution on the subcontinent. Besides highlighting the fact that without cheap labor and raw materials from India the modernization of Britain during this era would have been highly unlikely, I will show how colonial policy led to the privation and death of millions of natives. I conclude that while India undoubtedly benefited from British colonial rule, the negatives for the subject population far outweighed the positives.

Colonialism, by definition, is exploitative and oppressive, with the rulers enriching themselves at the expense of those they rule. Generally speaking, colonizers dominate a territory's resources, labor force, and markets; oftentimes, they impose structures (cultural, religious and/or linguistic) to maintain control over the indigenous population. The effects of the expansion of European empires, which began in the 15th century, on the colonized can still be felt today. Some historians, for example, argue that colonialism is one of the leading causes of income inequality among countries in present times. They cite patterns of European settlement as determinative forces in the type of institutions developed in colonized countries, considering them major factors in economic backwardness. Economist Luis Angeles has argued that the higher the percentage of Europeans settling in a colony at its peak, the greater the inequality in that country so long as the settlers remained a minority, suggesting that the colonizers drained those lands of essential resources while reaping most, if not all, of the profits. In terms of per capita GDP in 1995, the 20 poorest countries were all former colonies, which would seem to bolster Angeles' contention. There are, however, competing views on how much underdevelopment in today's poorest countries is a byproduct of colonial rule and how much of it is influenced by factors such as a country's lack of natural resources or area characteristics. For the poet, activist and politician Aimé Césaire, the verdict was in: colonizers were "the decisive actors ... the adventurer and the pirate, the wholesale grocer and the ship owner, the gold digger and the merchant, appetite and force, and behind them, the baleful projected shadow of a form of civilization which, at a certain point in its history, finds itself obliged, for internal reasons, to extend to a world scale the competition of its antagonistic economies."

This is not to suggest that Western European nations were the first and only countries to pursue imperialistic policies, or that nothing good came out of colonial policies for the subject population. Dinesh D'Souza, while arguing that colonialism has left many positive as well as negative legacies, has stressed that there is nothing uniquely Western about colonialism, writing: "Those who identify colonialism and empire only with the West either have no sense of history or have forgotten about the Egyptian empire, the Persian empire, the Macedonian empire, the Islamic empire, the Mongol empire, the Chinese empire, and the Aztec and Inca empires in the Americas." For this paper's purposes, however, I will focus on the British Empire, its colonizing efforts in India (1757-1947), and the effects British policy had on that subject population.
A couple of caveats before examining the British-Indian relationship: experiences differed from colony to colony during this period of European imperialism, and India was unique in the colonial experience because of its size and history. It should also be noted that India was rather unusual among colonized lands during this era for at least two reasons. First, South Asia was "already a major player in world commerce and possessed a well-developed trading and financial world" by the time Europeans arrived. Indigenous administrative structures already existed for taxation purposes, while commerce within the country and throughout the continent offered prospects of giant profits. Second, British India, which included today's India, Pakistan and Bangladesh, was a region so large that there were areas in which Britain exercised direct control over the subject population and others where it exerted indirect control. It is exceedingly difficult, therefore, to extrapolate from one experience to another. Although it is impossible to determine how India would have developed had England never established a dominating presence there, I find the results of British colonialism to have been a mixed bag for India: the negatives, however, far outweighed the positives. Liberal and democratic aspects of British colonialism in India played a significant role in leading to a democratic South Asia following Indian independence in 1947. Yet the British, first through the East India Company and then through direct government control, held almost all of the political and economic power in India during the Empire's expansion and apogee, guaranteeing that the Indian economy could not evolve and/or function independent of the ruling power's control; ensuring that raw materials extracted from Indian soil would go towards British manufacturing industries mostly without profiting the vast majority of Indians; and leading to lives of privation for millions of indigenous subjects. Although arguments have been made that, in political and economic terms, South Asia was backward until the arrival of Europeans, recent research has debunked that myth, showing the region to have possessed healthy trading and financial structures prior to the Europeans' arrival.

British Colonial Strategy in the Subcontinent

Imperial powers followed two basic strategies when colonizing. They either allowed a large number of Europeans to settle overseas (known as Settler Colonies) or sent a much smaller number, usually less than 1 percent of the population, to serve as administrators and tax collectors (known as Peasant Colonies). Britain followed the latter strategy in regards to India. The percentage of English people in India in 1913, for example, was only 0.1 percent of the country's population; by comparison, they accounted for over one-fifth (21.4 percent) of the population in South Africa and Lesotho during the same period. As previously mentioned, Britain exerted both direct and indirect control over the Indian subcontinent. Areas of indirect control were called "native states." These were controlled by Indian rulers who wielded considerable power over the internal administration of the land, while the British exercised complete control over the area's defense and foreign policies.
When looking at this two-pronged approach Britain took in establishing an Indian colony, the economist Lakshmi Iyer has argued that there is a differential long-term effect on areas the Empire controlled directly compared to areas in which it basically outsourced control. Expropriation of Indian land was negligible; rather, the English taxed Indian land, producing considerable revenues and inducing the indigenous population to shift from traditional to commercial products (e.g. tea). Areas that were directly under British control today have significantly lower levels of public goods relative to areas that were not under direct colonial rule. In 1961, for example, districts (administrative divisions below state level) that had been under direct control of the British Empire had lower levels of primary and middle schools, as well as medical dispensaries. Present-day differences between directly and indirectly controlled areas, Iyer argues, are most likely the result of differences in internal administration during the colonial period, because once the British left in 1947, all the native states were integrated into independent India and have since been subject to a uniform administrative, legal and political structure.

The Company and the Crown

By the middle of the 18th century, there were five major European colonial powers: the Dutch Republic, France, Great Britain, Portugal, and Spain. From about 1850 on, however, Britain's overseas empire would be unrivaled; by 1901, the empire would encompass 11.2 million square miles and rule about 400 million people. For much of the 19th and 20th centuries, India was Britain's largest and economically most important colony, an "empire within an empire." It should be noted that although this period coincided with the birth of the Industrial Revolution, historians and economists have cast doubt on whether industrialization was the sine qua non of British imperialism. They have noted that England's first major advance into the Indian subcontinent began in Bengal in the middle of the 18th century, long before large-scale mechanization turned Britain into the "workshop of the world." Historian P. J. Marshall, in studying early British imperialism, has written: "As a blanket term the Industrial Revolution explains relatively little about British expansion in general at the end of the eighteenth century." While Marshall and others may be correct in asserting the British would have pursued empire even without the Industrial Revolution, its advent impacted colonial policy in that it required expanded markets and a steady supply of raw materials to feed the country's manufacturing industries. Cotton, for example, was one of the driving forces behind the evolution of Britain's modern economy. British traders purchased raw cotton fibers from plantations, processed them into cotton cloth in Lancashire mills, and then exported the cloth to colonial markets including India. Prior to the Industrial Revolution, India had been the world's main producer of cotton textiles, with a substantial export trade. By the early nineteenth century, however, Britain had come to dominate the world market for cotton textiles on the basis of technology that lowered production costs. "This dramatic change in international competitive advantage during the Industrial Revolution was surely one of the key episodes in the Great Divergence of living standards between Europe and Asia."
Britain's 200-year run ruling India began in the mid-17th century, when the British East India Company set up trading posts in Bombay, Madras and Calcutta. In 1757, Robert Clive led Company-financed troops, led by British officers and staffed by native soldiers known as sepoys, to victory over French-backed Indian forces. The victory at the Battle of Plassey made the East India Company the leading power in the country. It would dominate India for just over 100 years, the area it controlled growing over that time to encompass modern Bangladesh, a majority of southern India and most of the territory along the Ganges River in the north of the country. The East India Company's control of Bengal alone yielded taxes of nearly £3 million; by 1818, its territorial revenues in India stood at £22 million, allowing it to finance one of the world's largest standing armies. This established British rule well before the Industrial Revolution could have played any major role in Britain expanding its overseas empire, strengthening the arguments of historians such as Marshall regarding the significance, or lack thereof, of the role mechanization in England had in the country's expansionist efforts. The fact remains, however, that Britain in the 19th century would become the world's leading industrial power and India a major source of raw materials for its industry. What's more, the subcontinent's population of 300 million would constitute a huge source of revenue and a gigantic market for British-made goods. Although the English expanded gradually in India during those first 100 years of colonization, once the British government gained control of the country's administration following the Indian War of Independence in 1857, India was virtually incorporated into the British Empire and became its "crown jewel."

During the life of the British Empire, India was its most profitable colony. Examples of huge returns on British investments in India, based on surviving business records, are plentiful. To give two: Binny and Co., which was founded in 1799 with 50,000 rupees in capital, returned profits of 140,000 rupees only 12 years later; and William Mackinnon's Indian General Steam and Navigation Co., which began trading in 1847, had assets five years later valued at more than nine times the original capital of 72,000 rupees. The 1852 prospectus of the Chartered Bank of India, Australia, and China stated that "bearing in mind the very high rate of interest which prevails in the East and the very lucrative nature of the Exchange Business ... a very large Annual Dividend may be looked for with certainty." British investment in India increased enormously over the second half of the 19th and the beginning of the 20th centuries. According to economist James Foreman-Peck, by the end of 1911, 373 stock companies were estimated to be carrying on business exclusively or almost exclusively in India yet were registered elsewhere, with the average size of those companies (railways accounted for nearly half of the capital, and tea plantations about one-fifth) dwarfing that of the far more numerous (2,463) Indian-registered companies. The discrepancies between the two are stark. The companies registered outside India had paid-up capital of £77.979 million and debentures of £45.353 million, compared to £46.251 million and £6 million, respectively, for Indian-registered companies.
According to Foreman-Peck, "The magnitude of foreign investment and the rate of return on it, broadly defined, have been seen as a means by which empire imposed burdens on colonies and boosted the imperial nation's economy." This was not an idea that could only be gleaned in hindsight. Writing at the end of the 19th century, historian Brooks Adams observed: "Probably since the world began no investment has yielded the profit reaped from the Indian plunder. The amount of treasure wrung from the conquered people and transferred from India to English banks between Plassey and Waterloo (fifty-seven years) has been variously estimated at from $2,500,000,000 to $5,000,000,000. The methods of plunder and embezzlement by which every Briton in India enriched himself during the earlier history of the East India Company gradually passed away, but the drain did not pass away. The difference between the earlier day and the present is that India's tribute to England is obtained by 'indirect methods' under forms of law. It was estimated by Mr. Hyndman some years ago that at least $175,000,000 is drained away every year from India without a cent's return."

Plunder and Famine

At the time Britain established its colony on the subcontinent, the Indian economy was based predominantly on agriculture. Iyer has shown that since the Indian economy was so dependent on farming, British annexation policy focused on acquiring land with the most agricultural potential, guaranteeing that land taxation would be the East India Company's (and later the British government's) biggest source of income throughout the colonial period. In 1765-66, the East India Company collected "the equivalent of £1,470,000; and by 1790-1791, this figure had risen to £2,680,000." To ensure the land-revenue system, known as "tax farming," would continue to supply money to the East India Company's treasury, the Company introduced the Permanent Settlement of Bengal in 1793, an agreement between it and absentee landlords, known as zamindars. Through this policy, peasants who worked the land became the tenants of the zamindars, who, for themselves and the tax collectors, extracted as much as possible from those who cultivated the land. This settlement created a class of Indian landowners loyal to the English and a division in rural society between tenants and landlords which lasted well into the 20th century.

The Indian climate is characterized by a cycle that generally includes nine months of dry weather followed by three months of rains known as the monsoon. At least once in a decade, the monsoon fails to arrive and a drought occurs. Indians had for centuries set aside a portion of their crops to ensure there would be adequate food in times of drought. This practice was so successful that between the 11th and 18th centuries, India experienced only 14 major famines; yet, from 1765-1858, when it was under East India Company control, India suffered through 16 major famines, followed by an average of one famine every two years under British Colonial Office rule from 1859-1914. Under British rule during the 19th century, over 25 million Indians died of famine: 1 million between 1800 and 1825, 4 million between 1825 and 1850, 5 million between 1850 and 1875, and 15 million between 1875 and 1900; more than 30 million deaths occurred from famine between 1870 and 1910. Why did tens of millions die from starvation under the East India Company and the British Raj?
Why, comparatively speaking, did so many famines occur on Britain's watch? Historian Laxman D. Satya argues the famines were price-induced and that timely government intervention could have prevented millions of deaths from starvation. State intervention was minimal, however; Lord Curzon once acknowledged that a famine in India excited no more attention in Britain than a squall on the Serpentine. Like other European imperialists in the late 18th century, Britain, first through the East India Company, followed a laissez-faire doctrine whereby government interference in the economy was anathema; in addition, famine later came to be seen as a natural way to control overpopulation. According to Satya, "... any act that would influence the prices of grains such as charity was to be either strictly monitored or discouraged. Even in the face of acute distress, relief had to be punitive and conditional." The powers that be also began using famine labor to build infrastructure (railways, roads), ensuring that revenues would continue to increase and expenditures would be kept low; worst of all, the new infrastructure allowed for the exportation of grain that could have fed the starving. Studies have shown that even in years of official famine (Britain only recognized three periods of famine) there was never a shortage of food grains. The problem was that with prices for grains so high and wages stagnant, most people could not afford to buy them. As an example, during the Indian famine of 1887-88, nearly 44 percent of total exports from Berar, one of the hardest-hit provinces, were food grains. Between 1874 and 1903 the province exported an average of over 40 tons of grain, and Satya has shown that this could have amounted to nearly 30 pounds of food per person. Historian and social commentator Mike Davis has even cited evidence that grains were exported to Europe for speculative trading while millions were dying of starvation. Since the primary concern of the government was maximizing returns on investments, it did not prioritize famine relief, considering such expenditures wasteful; therefore, relief camps were "deliberately kept in remote locations and beyond the reach of the physically weakened population." What's more, people seeking relief were required to work on colonial projects as a condition for receiving food: as little as 16-22 ounces of food for a minimum of nine to ten hours of often grueling labor. Fearing that Indian nationalists would take to the newspapers (in general, the government had had a comparatively lax policy toward the press), the Raj implemented tight press control through various laws, including the Newspaper Act of 1908 and the Indian Press Act of 1910. It is important to note that despite these and other attempts at press censorship, a large number of vernacular newspapers were published throughout the country and played an integral role in creating a nationalist/political consciousness in India.

Friday, January 3, 2020

Biography of Richard Aoki, Asian-American Black Panther

Richard Aoki (November 20, 1938–March 15, 2009) was a field marshal in the Black Panther Party, the lesser-known colleague of Bobby Seale, Eldridge Cleaver, and Huey Newton. Those names often come to mind when the Black Panther Party is the topic at hand, but since Aoki's death there has been a renewed effort to familiarize the public with this Panther who is not as well known.

Fast Facts: Richard Aoki
Known For: Civil rights activist, founder of the Asian American Political Alliance and field marshal of the Black Panthers
Born: November 20, 1938 in San Leandro, California
Parents: Shozo Aoki and Toshiko Kaniye
Died: March 15, 2009 in Berkeley, California
Education: Merritt Community College (1964–1966); Sociology B.S., University of California at Berkeley (1966–1968); M.S. Social Welfare
Spouse: None
Children: None

Early Life

Richard Masato Aoki was born November 20, 1938, in San Leandro, California, the eldest of two sons born to Shozo Aoki and Toshiko Kaniye. His grandparents were Issei, first-generation Japanese Americans, and his parents were Nisei, second-generation Japanese Americans. Richard spent the first few years of his life in Berkeley, but his life underwent a major shift when World War II began. When the Japanese attacked Pearl Harbor in December 1941, xenophobia against Japanese-Americans reached unparalleled heights in the U.S. The Issei and Nisei were not only held responsible for the attack but also generally regarded as enemies of the state still loyal to Japan. As a result, President Franklin Roosevelt signed Executive Order 9066 in 1942. The order mandated that individuals of Japanese origin be rounded up and placed in internment camps. The 4-year-old Aoki and his family were evacuated first to the Tanforan Assembly Center in San Bruno, California, and then to a concentration camp in Topaz, Utah, where they lived without indoor plumbing or heating. "Our civil liberties were grossly violated," Aoki told the Apex Express radio show of being relocated. "We were not criminals. We were not prisoners of war." During the politically tumultuous 1960s and 1970s, Aoki developed a militant ideology directly in response to being forced into an internment camp for no reason other than his racial ancestry.

Life After Topaz

After his release from the Topaz internment camp, Aoki settled with his father, brother, and extended family in West Oakland, California, a diverse neighborhood that many African-Americans called home. Growing up in that part of town, Aoki encountered blacks from the South who told him about lynchings and other acts of severe bigotry. He connected the treatment of blacks in the South to incidents of police brutality he'd witnessed in Oakland. "I began putting two and two together and saw that people of color in this country really get unequal treatment and aren't presented with many opportunities for gainful employment," he said. After high school, Aoki enlisted in the U.S. Army, where he served for eight years. As the war in Vietnam began to escalate, however, Aoki decided against a military career because he didn't fully support the conflict and wanted no part in the killing of Vietnamese civilians. When he returned to Oakland following his honorable discharge from the army, Aoki enrolled in Merritt Community College, where he discussed civil rights and radicalism with future Panthers Bobby Seale and Huey Newton.

Black Panther Party

Aoki read the writings of Marx, Engels, and Lenin, standard reading for radicals in the 1960s.
But he wanted to be more than just well-read; he also wanted to effect social change. That opportunity came when Seale and Newton invited him to read over the Ten-Point Program that would form the foundation of the Black Panther Party (BPP). After the list was finalized, Newton and Seale asked Aoki to join the newly formed Black Panthers. Aoki accepted after Newton explained that being African-American wasn't a prerequisite for joining the group. He recalled Newton saying: "The struggle for freedom, justice and equality transcends racial and ethnic barriers. As far as I'm concerned, you black." Aoki served as a field marshal in the group, putting his military experience to use to help members defend the community.

Soon after Aoki became a Panther, he, Seale, and Newton took to the streets of Oakland to pass out the Ten-Point Program. They asked residents to name their top community concern, and police brutality emerged as the No. 1 issue. Accordingly, the BPP launched what they called "shotgun patrols," which entailed following the police as they patrolled the neighborhood and observing as they made arrests. "We had cameras and tape recorders to chronicle what was going on," Aoki said.

Asian-American Political Alliance

The BPP wasn't the only group Aoki joined. After transferring from Merritt College to UC Berkeley in 1966, Aoki played a key role in the Asian-American Political Alliance (AAPA). The organization supported the Black Panthers and opposed the war in Vietnam. Aoki "gave a very important dimension to the Asian-American movement in terms of linking the struggles of the African-American community with the Asian-American community," friend Harvey Dong told the Contra Costa Times. In addition, the AAPA participated in local labor struggles on behalf of groups such as the Filipino-Americans who worked in the agricultural fields. The group also reached out to other radical student groups on campus, including Latino- and Native American-based organizations such as MEChA (Movimiento Estudiantil Chicano de Aztlán), the Brown Berets, and the Native American Student Association.

Third World Liberation Front Strike

The disparate resistance groups eventually united in a collective organization known as the Third World Council. The council wanted to create a Third World College, "an autonomous academic component of (UC Berkeley), whereby we could have classes that were relevant to our communities," Aoki said, "whereby we could hire our own faculty, determine our own curriculum." In the winter of 1969, the council started the Third World Liberation Front Strike, which lasted an entire academic quarter, three months. Aoki estimated that 147 strikers were arrested; he himself spent time in the Berkeley City Jail for protesting. The strike ended when UC Berkeley agreed to create an ethnic studies department. Aoki, who had recently completed enough graduate courses in social work to earn a master's degree, was among the first to teach ethnic studies courses at Berkeley.

Teacher, Counselor, Administrator

In 1971, Aoki returned to Merritt College, part of the Peralta Community College District, to teach. For 25 years, he served as a counselor, instructor, and administrator in the Peralta District. His activity in the Black Panther Party waned as members were imprisoned, assassinated, forced into exile, or expelled from the group.
By the end of the 1970s, the party had met its demise due to successful attempts by the FBI and other government agencies to neutralize revolutionary groups in the United States. Although the Black Panther Party fell apart, Aoki remained politically active. When budget cuts at UC Berkeley placed the future of the ethnic studies department in jeopardy in 1999, Aoki returned to campus, 30 years after he participated in the original strike, to support student demonstrators who demanded that the program continue.

Death

Inspired by his lifelong activism, two students named Ben Wang and Mike Cheng decided to make a documentary about the onetime Panther titled "Aoki." It debuted in 2009, and Aoki saw a rough cut of the film before his death. After suffering several health problems, including a stroke, a heart attack, and failing kidneys, Aoki died on March 15, 2009, at the age of 70. Following his death, fellow Panther Bobby Seale remembered Aoki fondly, telling the Contra Costa Times that Aoki "was one consistent, principled person who stood up and understood the international necessity for human and community unity in opposition to oppressors and exploiters."

Legacy

What distinguished Aoki from others in the black radical group? He was the only founding member of Asian descent. A third-generation Japanese-American from the San Francisco Bay Area, Aoki not only played a fundamental role in the Panthers but also helped establish an ethnic studies program at the University of California, Berkeley. Aoki's biography, based on interviews with Diane C. Fujino, reveals a man who counteracted the passive Asian stereotype and embraced radicalism to make long-lasting contributions to both the African- and Asian-American communities.

Sources

Chang, Momo. "Former Black Panther leaves legacy of activism and Third World solidarity." East Bay Times, March 19, 2009.
Dong, Harvey. "Richard Aoki (1938–2009): Toughest Oriental to Come out of West Oakland." Amerasia Journal 35.2 (2009): 223–32.
Fujino, Diane C. Samurai Among Panthers: Richard Aoki on Race, Resistance, and a Paradoxical Life. Minneapolis: University of Minnesota Press, 2012.