Tuesday, August 6, 2019
Value Package Introduction in COS
Abstract

VPI (Value Package Introduction) was one of the core programs in the Cummins Operating System (COS). VPI was the process by which the Company defined, designed, developed, and introduced high-quality Value Packages for customers. One of the key processes in a VPI program was to identify part failures. When a part failure was identified, the part was transported to other plant locations. A delay in delivery time from one plant location to another impeded the diagnosis of the part and resulted in a postponement of a critical resolution and subsequent validation. As a proven methodology, customer-focused Six Sigma tools were utilized for this project to quantify the performance of this process. Six Sigma was a data-driven approach designed to eliminate defects in the process. The project goal was to identify root causes of process variation and reduce the number of days it was taking for a part to move from the point of failure to the component engineer for evaluation. The average number of days at the start of this project was 137; the goal was to reduce this by 50%. The benefit of performing this project was a reduction in the time it took for parts to move, which improved the ability to analyze and fix problems in a timely manner and allowed parts to be improved or modified and put back on the engine for further testing.

VPI Failed Parts Movement Between Locations

Introduction

VPI (Value Package Introduction) was one of the core programs in the Cummins Operating System (COS). VPI was the process by which the Company defined, designed, developed, and introduced high-quality Value Packages for customers. The complete VPI package allowed Cummins to continuously improve the products delivered to customers, and this project was conducted in an effort to increase the value of these packages. By improving the process of moving parts from one location to another, Cummins benefited in both cycle time and cost. VPI included all the elements of products, services, and information delivered to the end-user customer. These products included: oil, filters, generator sets, parts, business management tools/software, engines, electronic features and controls, service tools, reliability, durability, packaging, safety and environmental compliance, appearance, operator friendliness, integration in the application, robust design, leak-proof components, ease of service and maintenance, fuel economy, rebuild cost, price, and diagnostic software. These were key factors of customer satisfaction that allowed Cummins to remain competitive and provide quality parts and services to the end customers. This process was essential to surviving among competitors.

Statement of the Problem

One of the key processes in a VPI program was to identify and resolve part failures. In order to do this in a timely manner, parts needed to travel quickly from the point of failure to the component engineers for diagnosis. Failures were identified at the Cummins Technical Center during engine testing. The failed parts were then sent to one of two other locations, the Cummins Engine Plant (Cummins Emission Solutions) or the Fuel Systems Plant, where they were to be delivered to the appropriate engineer for diagnosis and part engineering changes. A delay in the diagnosis of a failed part meant a delay in the resolution of the problem and subsequent engine testing.
The ideal situation was for a part failure to be identified by the test cell technician, delivered to the engineer, diagnosed by the engineer, and the part redesigned for further testing on the engine. When this did not occur in a timely manner, the failed part did not reach the engine again for a sufficient amount of testing. The problem was that parts were either taking a very long time to get into the engineers' hands, or the parts were lost. Engines require a pre-determined amount of testing time to identify potential engine failures and associated risks to the customer and the Company. As a result, the opportunity to continually improve parts and processes was missed. Through the use of customer-focused Six Sigma tools, this project improved the ability to solve customer problems and achieve company targets. Investigation was required to determine the most efficient process for the transfer of failed parts between different sites within Cummins.

Significance of the Problem

This process was important in solving part failures. Timely transfer of parts to the correct engineer for analysis reduced the amount of time for issue correction and improved the performance of the engines that were sold to customers. This package allowed Cummins to continuously improve the process and reduce cycle time and cost. This project involved the transportation of VPI failed parts from the point of failure to the appropriate component engineer. The improvements made during this project ensured that parts were received by the engineers in a timely manner, which allowed further testing of the re-engineered failed parts.

Statement of the Purpose

The process of identifying part failures and delivering them to the appropriate component engineer was essential to diagnosing problems and correcting them. Personnel were either not trained in the problem identification area or were unaware of the impact that their work had on the entire process. Communication from the test cell engineers who identified part failures was important in two areas: first, it was critical that the engineer responsible for the part was notified; second, the Failed Parts Analyst (FPA) had to be notified in order to know when to pick up the part for shipping. The partnership between the test cell engineer and these two areas was fundamental to the success of this process. Other factors that contributed to the time delay in part failure identification and delivery were vacation coverage of key employees and training of shipping and delivery personnel. The average number of days for a part to be removed from the test cell engine and delivered to the appropriate design engineer was 137 days. Based on the logistics of the locations where the parts were being delivered, this process was capable of being accomplished in less time. The purpose of this project was to reduce the amount of time it was taking for this process to occur. The benefits of performing this project were a reduction in the time it took for parts to move, which improved the ability to analyze and fix problems and allowed parts to be improved or modified and put back on the engine for further testing. The improvements derived from this project can be applied to similar processes throughout the multiple business units.

Definition of Terms

VPI - Value Package Introduction; a program utilized by Cummins through which new products were introduced.
It included all the elements of creating a new product such as design, engineering, and final product production.

COS - Cummins Operating System; the system of Cummins operations which was standard throughout the Company. It identified the manner in which Cummins operated.

CE matrix - Cause and Effect matrix; a tool that was used to prioritize input variables against customer requirements.

FPA - Failed Parts Analyst; the person responsible for retrieving failed parts from the test cells, determining the correct engineer to whom these failed parts were to be delivered, and preparing the parts for shipping to the appropriate location.

SPC - Statistical Process Control; an application of statistical methods utilized in the monitoring and control of a process.

TBE - Time Between Events; in the context of this paper, TBE represented the number of opportunities that a failure had of occurring between daily runs.

McParts - a software application program which tracked component progress through the system. It provided a timeline from the time a part was entered into the system until it was closed out.

Assumptions

The assumption was made that all participants in the project were experienced with the software application program that was utilized.

Delimitations

Only failed parts associated with the Value Package Introduction program were included in the scope of this project. Additionally, only the heavy-duty engine family was incorporated; the light-duty diesel and mid-range engine families were excluded. The project encompassed three locations in Southern Indiana. The focus of this project was on delivery time and did not include packaging issues. It also focused on transportation and excluded database functionality. Veteran employees were selected for collecting data. The variable of interest was delivery time. Data collection was limited to first shift only. The project focused on redesigning an existing process and did not include the possibility of developing a new theory.

Limitations

The methodology used for this project did not include automation of the process as a step. RFID was a more attractive way to resolve this problem; however, it was not economically feasible at the time. The population was limited since the parts that were observed were limited to heavy-duty engines, which reduced variation in the size and volume of parts. Time constraints and resource availability were also issues. Because team members resided at several locations, scheduling meetings was problematic, and coordinating team meetings was a further challenge because room availability was limited.

Review of Literature

Introduction

The scope of this literature review was to evaluate articles on failed parts within Value Package Introduction (VPI) programs. However, although quality design for customers is widely utilized, the literature on Value Package Introduction was rather scarce. VPI was a business process that companies used to define, design, develop, and introduce high-quality packages for customers. VPI included all the elements of products, services, and information delivered to the end-user customer. One of the key processes in a VPI program was to problem-solve part failures, which was the direction this literature review traveled.

Methods

This literature review focused on part/process failures and improvements.
The methods used in gathering reading materials for this literature review involved the use of the Purdue University libraries: Academic Search Premier, Readers' Guide, and the OmniFile FT Mega library. Supplementary investigation was conducted online, where many resources and leads to reference material were found. All of the references cited are from 2005 to present, with the exception of a Chrysler article dated 2004, which was an interesting reference discussing the use of third-party logistics centers; a journal article from 1991 that explains the term "cost of quality," which is used throughout this literature review; and two reference manuals published by AIAG which contain regulations for the ISO 9001:2000 and TS 16949 standards. Keywords used during the research included terms such as scrap, rework, failed parts, and logistics.

Literature Review

Benchmarking. Two articles, authored by Haftl (2007), concentrated on the mixture of metrics needed to optimize overall performance. Some of these metrics included completion rates, scrap and rework, machine uptime, machine cycle time, and first-pass percentages. "According to the 2006 American Machinist Benchmarking survey, leading machine shops in the United States are producing, on average, more than four times the number of units produced by other non-benchmarked shops. Also worth noting is that they also reduced the cost of scrap and rework more than four times" (Haftl, 2007, p. 28). The benchmark shops showed greater improvement than other machine shops. "The benchmark shops cut scrap and rework costs to 4.6 percent of sales in 2006 from 6.6 percent three years ago, and all other shops went to 7.8 percent of their sales in 2006 from 9.3 percent three years ago" (Haftl, 2007, p. 28). The successful reduction of scrap and rework costs by the benchmark shops was attributed to several factors. First, training was provided to employees and leadership seminars were held. Secondly, these shops practiced lean manufacturing, and lastly, they had specific programs which directly addressed scrap and rework. Whirlpool, one of the nation's leading manufacturers of household appliances, had used benchmarking as a means of finding out how it rated in comparison to its competitors. It benchmarked its primary competitor, General Electric. As a result, Whirlpool discovered what improvements it could make that could be managed at a low investment. The improvement processes were especially useful and were applied in existing strengths of the company. Whirlpool rolled out a new sales and operating plan based on customer requirements (Trebilcock, 2004).

Quality. An overall theme contained in all of the articles reviewed was that of quality. In Staff's review (2008), he contended that regardless of a company's size, quality was critical in maintaining a competitive advantage and retaining customers. The Quality Leadership 100 is a list of the top 100 manufacturers who demonstrated excellence in operations. The results were based on criteria such as scrap and rework as a percentage of sales, warranty costs, rejected parts per million, the contribution of quality to profitability, and shareholder value. Over 800 manufacturers participated in this survey. The top three manufacturers for 2008 were listed as: #1 Advanced Instrument Development, Inc., located in Melrose Park, IL; #2 Toyota Motor Manufacturing in Georgetown, KY; and #3 Utilimaster Corp. in Wakarusa, IN (Staff, 2008).
In an article written by Cokins (2006), the author stressed that quality was an important factor in improving profitability. He informed the reader that quality management techniques assisted in identifying waste and generating problem-solving approaches. One of the problems he cited regarding quality was that it was not often measured with the appropriate measuring tools; as a result, organizations could not easily quantify the benefits in financial terms. Another obstacle that affected quality was the use of traditional accounting practices: the financial data was not captured in a format that could easily be applied in decision making. Because quantifiable measures lacked a price base against which to compare the benefits, management often perceived process improvements as being risky. Cost of Quality (COQ) was the cost associated with identifying, avoiding, and making corrections to defects and errors. It represented the difference between actual costs and reduced costs as a result of identifying and fixing defects or errors. In Chen's report (Chen & Adam, 1991), the authors broke cost of quality down into two parts, the cost of control and the cost of failure. They explained that cost of control was the most easily quantifiable because it included prevention and measures to keep defects from occurring; cost of control had the capability to detect defects before a product was shipped to a customer. Control costs included inspection, quality control labor costs, and inspection equipment costs. Costs of failure included internal and external failures and were harder to calculate. Internal failures resulted in scrap and rework, while external failures resulted in warranty claims, liability, and hidden costs such as loss of customers (Chen & Adam, 1991). Because cost of control and cost of failure were related, managing these two elements reduced part failures and lowered the costs associated with scrap and rework. Tsarouhas (2009, p. 551) reiterated in his article on engineering and system safety that "failures arising from human errors and raw material components account for 25.06% and 5.35%, respectively, which is about 1/3 of all failures...". "A rule of thumb is that the nearer the failure is to the end-user, the more expensive it is to correct" (Cokins, 2006, p. 47). Identification of failed parts was a key process of Value Package Introduction and key to identifying and correcting failures before they reached the customer. A delay in the diagnosis of a defective part resulted in the delay of, or a miss to, the implementation of a critical fix and subsequent validation. When a delay occurred, the opportunity to continually improve parts and processes was not achieved. In a journal article written by Savage and Son (2009), the authors affirmed that effective design relied on quality and reliability. Quality, they explained, was the adherence to specifications required by the customer. Dependability of a process included mechanical reliability (hard failures) and performance reliability (soft failures); these two types of failures occurred when performance measures failed to meet critical specifications (Savage & Son, 2009).

Tools and specifications. The remaining articles discussed in this literature review focused on tools and specifications that were utilized across the business environment. Specifications were important aspects of fulfilling a customer's needs. Every company had its own unique way of operating, so businesses often had slightly different needs (Smith, Munro & Bowen, 2004, p. 225).
There were a number of tools available to help meet specific customer requirements; quality control systems and identification of failed parts were among these tools. The application of statistical methods was used to make improvement efforts more effective. Two common statistical methods were those associated with statistical process control and process capability analysis. The goal of a process control system was to make predictions about the current and future state of a process. A process was said to be operating in statistical control when the only sources of variation were common causes (Down, Cvetkovski, Kerkstra & Benham, 2005, p. 19). Common causes referred to sources of variation that over time produced a stable and repeatable distribution; when common causes yielded stable results, the output was considered to be predictable. SPC involved the use of control charts through an integrated software package. In an article by Douglas Fair (2008), he viewed product defects through the eyes of the consumer. He stated that to truly leverage SPC to create a competitive advantage, key characteristics had to be identified and monitored (Fair, 2008). The means for monitoring some of these characteristics involved the use of control charts. An article written on integrated control charts introduced control charts based on time-between-events (TBE). These charts were used in manufacturing companies to gauge the reliability of parts and service-related applications. An event was defined as an occurrence of a defect, and time referred to the amount of time between occurrences of defect events (Shamsuzzaman, Min, Ngee & Haiyun, 2008). Process capability was determined by the variation that came from common causes; it represented the best performance of a process. Other writers deemed that one way to improve quality and achieve the best performance was to reduce product deviation, using parameters such as the process mean and production run times (Tahera, Chan & Ibrahim, 2007). Peter Roost (2007) favored the use of Computer-Aided Manufacturing (CAM) tools as a means of improving quality. According to the author, CAM allowed a company to eliminate errors that cause rework and scrap, improved delivery times, simplified operations, and identified bottlenecks, which assisted in the efficient use of equipment (Roost, 2007). Other articles on optimization introduced a lot-size modeling technique to identify defective products. Lot-sizing emphasized the number of units of an item that could be produced without interruption on the machinery used in the production process (Buscher & Lindner, 2007).

Conclusion

In this literature review the importance of failed part identification was presented. The impact that quality and reliability had on this process was indicative of the value that proper measuring tools provide. Through the use of customer-focused tools, the identification and correction of failed parts was more easily accomplished and allowed a quicker resolution to customer problems. Benchmarking was discussed as a means of comparing outputs to those of competitors and was the first step in identifying areas requiring immediate attention. Haftl (2007) and Trebilcock (2004) devoted their articles to benchmarking and the impact it had on identifying areas demanding immediate improvement processes.
Staff (2008), Cokins (2006), Tsarouhas (2009), and Savage and Son (2009) spent more time discussing the critical requirement of quality and the effects it had on competitive advantage. Lastly, Smith, Munro and Bowen (2004), Down, Cvetkovski, Kerkstra and Benham (2005), Fair (2008), Tahera, Chan and Ibrahim (2007), and Roost (2007) discussed the different specifications and tools used in improving quality and identifying failures. The articles involving benchmarking were concise and easy to understand. A similarity among all of the articles was the consensus that quality is important in identifying and preventing failures and that competitive advantage cannot be obtained without it. Gaps identified through this literature review concerned the methods of making process improvements; several of the authors had their own version of the best practice to use to improve performance. The articles on tools and specifications were very technical and discussed the different methods. In Fair's article, the author had a different perspective from any of the other articles reviewed: he wrote from the view of a consumer.

Methodology

This project built on existing research. Documentation was reviewed to determine the methodology used in previous process designs. The purpose of this project was to redesign the process flow to improve capability and eliminate non-value-added time. Team members were selected based on their vested interest in the project; each team member was a key stakeholder in the actual process. A random sampling technique was used in which various components were tracked from point of failure to delivery. McParts, a software application program, was utilized to measure the amount of time that a component resided in any one area. Direct observation was also incorporated. A quantitative descriptive study was utilized in which numerical data was collected. The DMAIC method of Six Sigma was used. The steps involved in the DMAIC process were: (1) Define project goals and the current process; (2) Measure key aspects of the current process and collect relevant data; (3) Analyze the data to determine cause-and-effect relationships and ensure that all factors are being considered; (4) Improve the process based upon data analysis; and (5) Control the process through the creation and implementation of a project control plan. Process capability was established by conducting pilot samples from the population.

In the Define stage, the "Y" variable objective statement was established: reduce the amount of time it takes for a failed part to go from point of failure to the hands of the evaluating engineer by 50%. Next, a data collection plan was formed. The data was collected using the McParts component tracking system, and reports were run on the data to monitor part progression.

In the second stage, the Measure stage, a process map was created which identified all the potential inputs that affected the key outputs of the process. It also allowed people to illustrate what happened in the process. This step was useful in clarifying the scope of the project. Once the process map was completed, a Cause and Effect matrix was developed. The Cause and Effect matrix fed off of the process map, and key customer requirements were then identified. These requirements were rank ordered and a priority factor was assigned to each output (on a 1 to 10 scale). The process steps and materials were identified and each step was evaluated based on the score it received. A low score indicated that the input variable had a smaller effect on the output variable; conversely, a high score indicated that changes to the input variable greatly affected the output variable and needed to be monitored.
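To illustrate the scoring arithmetic described above, the sketch below computes a weighted Cause and Effect matrix score for each process input: each input's rating against an output is multiplied by that output's customer priority (1 to 10) and summed. The input names, priorities, and ratings are hypothetical placeholders, not values taken from the project data.

# Minimal sketch of Cause & Effect (CE) matrix scoring with hypothetical data.
output_priority = {"delivery_time": 10, "part_traceability": 7, "data_accuracy": 5}

# Rating of how strongly each process input affects each output (0, 1, 3, 9 is a common convention).
input_ratings = {
    "incident_origination":   {"delivery_time": 9, "part_traceability": 3, "data_accuracy": 3},
    "part_tagging":           {"delivery_time": 9, "part_traceability": 9, "data_accuracy": 1},
    "fpa_pickup":             {"delivery_time": 9, "part_traceability": 3, "data_accuracy": 0},
    "destination_addressing": {"delivery_time": 3, "part_traceability": 9, "data_accuracy": 1},
}

def ce_score(ratings, priorities):
    # Weighted sum: rating x customer priority, accumulated over all outputs.
    return sum(r * priorities[out] for out, r in ratings.items())

scores = {name: ce_score(r, output_priority) for name, r in input_ratings.items()}
for name, score in sorted(scores.items(), key=lambda kv: kv[1], reverse=True):
    print(f"{name}: {score}")

Inputs with the highest totals are the ones the team would monitor most closely, mirroring the prioritization described above.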
The next step involved creating a Fault Tree Analysis (FTA). The FTA was used to help identify the root causes associated with particular failures. A measurement system analysis was then conducted; measurement tools such as the McParts software application program, as well as handling processes, were reviewed. Next, an initial capability study was conducted to determine the current process's capability. A design of experiment was then established, which entailed capturing data at various times throughout the project. Six months of data was obtained prior to the start of the project to show the current status. Once the project was initiated, data was collected on a continuous basis, and once the project was complete, data was collected to determine the stability and control of the process. After the experiment was completed and the data was analyzed, a control plan was created to reduce variation in the process and identify process ownership. All of the above steps included process stakeholders and team members who assisted in creating each output.

Data/Findings

Define. The purpose of this project was to reduce the number of days it was taking a part to move from point of failure to the component engineer for evaluation. Through the use of historical data, 2 of the 17 destination locations for parts were identified as being problematic. The average number of days it was taking parts to be delivered to the component engineer at the Fuel Systems Plant and the Cummins Engine Plant (Emission Solutions) locations was 137 days. Both sites were located in the same city where the part failures were identified. Key people involved in performing the various functions in part failures and delivery were identified and interviewed.

Measure. A process map was created documenting each step in the process, including the inputs and outputs of each process (Figure 1). Once the process was documented, the sample size was determined. Of the 3,000-plus parts, those parts delivered to the two sites were extracted, resulting in a sample size of 37 parts. Parts were then tracked using a controlled database called McParts. From this point, the key steps identified were utilized in creating a Cause and Effect matrix. The CE matrix prioritized input variables against customer requirements and was used to understand the relationships between key process inputs and outputs. The inputs were rated by the customer in order of importance. The top four inputs identified as having the largest impact on quality were: incident (part failure) origination, appropriate tagging of parts, the failed parts analyst role, and addressing the tagged part to the correct destination. The Cause and Effect matrix allowed the team to narrow down the list and weight the evaluation criteria. The team then performed a Fault Tree Analysis (FTA) on possible solutions; the FTA analyzed the effects of failures. The critical Xs involved the amount of time for filing an incident report and tagging parts, the amount of time it took for the FPA to pick up the parts from the test cells once the part failure was identified, and the staging and receiving process. Next, validation of the measurement system was conducted. An expert and two operators were selected to run a total of 10 queries in the McParts database using random dates. The results of the two operators, as shown in Figure 2, were then scored against each other (attribute agreement analysis within appraisers) and against those of the expert (appraiser versus standard).
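A simple way to score this kind of attribute agreement study is sketched below: each operator's query results are compared with the expert's standard answers, reporting percent agreement and Cohen's kappa. The ten true/false outcomes are invented placeholders, not the study's actual query results, and scikit-learn is assumed to be available.

from sklearn.metrics import cohen_kappa_score

# Hypothetical outcomes of the 10 McParts queries (1 = correct record retrieved, 0 = not).
expert     = [1, 1, 0, 1, 1, 0, 1, 1, 1, 0]   # standard answers
operator_1 = [1, 1, 0, 1, 0, 0, 1, 1, 1, 0]
operator_2 = [1, 1, 0, 1, 1, 0, 1, 0, 1, 0]

for name, op in [("operator 1", operator_1), ("operator 2", operator_2)]:
    agreement = sum(a == b for a, b in zip(op, expert)) / len(expert)
    kappa = cohen_kappa_score(op, expert)
    print(f"{name}: agreement vs standard = {agreement:.0%}, kappa = {kappa:.2f}")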
The next logical step was to determine whether there was a difference between the types of tests performed and the length of time it was taking a part to be delivered to the appropriate component engineer. There were two types of tests performed, dyno and field tests. Figure 6 shows that the median for field tests was slightly better than for dyno tests, which came as a surprise because field test failures occur out in the field at various locations, while the dyno tests are conducted at the Technical Center. The data drove further investigation into the outliers, which showed that of approximately 25 of these data points, 8 were ECMs, 5 were sensors, 7 were wiring harnesses, 1 was an injector, and 4 were fuel line failures. These findings were consistent with the box plot of days to close by group name: ECMs, sensors, wiring harnesses, and fuel lines had the highest variance. The similarities and differences in the parts were reviewed, and it was discovered that they were handled by different groups once they reached FSP. The Controls group handled ECMs, sensors, and wiring harnesses; the XPI group handled accumulators, fuel lines, fuel pumps, and injectors. Drilling down further, another box plot was created to graphically depict any differences between the two tests for both sites. The box plot showed that CES dyno tests had a much higher median and higher variability than CES field tests and Fuel Systems dyno and field tests (see Figure 7). An I-MR chart was created for dyno and field tests without special causes; the data was stable but not normal. A test of equal variances was run for CES and FSP dyno and field tests. Based on Mood's median test, there was no difference in medians; this was likely due to the small sample size in 3 of the 4 categories. However, the CES dyno tests showed a lot of variation and would require further investigation. An I-MR chart and box plot were run on the data for the XPI and Controls groups at the Fuel Systems Plant; the data was stable but not normal. Next, a test of equal variances was run, which showed that the variances were not equal. Thus, the null hypothesis that the variability of the two groups was equal was rejected. Attention was then directed towards the Fuel Systems Plant. A box plot created from the data showed there was a statistical difference between the medians for the FSP Controls group and XPI. Through the solutions derived from the DMAIC methodology of Six Sigma, the project team performed statistical analysis which showed that there would be benefits obtained by resolving the problems that were identified. The changes were implemented and a final capability study was performed on the data, which showed an 84% reduction in the number of days it took a part to move from point of failure to the hands of the component engineer for evaluation. Improvements were documented and validated by the team. To ensure that the performance of the process would be continually measured and that the process remained stable and in control, a control plan was created and approved by the process owner responsible for the process.
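The hypothesis tests described above (Mood's median test and a test of equal variances) can be reproduced with standard statistical libraries. The sketch below uses made-up delivery-time samples rather than the project's actual data and shows one way to run both tests with SciPy; the group names and values are purely illustrative.

import numpy as np
from scipy import stats

# Hypothetical days-to-deliver samples for two handling groups (not the project's data).
controls_days = np.array([140, 155, 132, 170, 128, 161, 149])
xpi_days      = np.array([ 95, 110,  88, 102, 120,  97, 105])

# Mood's median test: do the two groups share a common median?
stat, p_median, grand_median, table = stats.median_test(controls_days, xpi_days)
print(f"Mood's median test: p = {p_median:.3f}, grand median = {grand_median}")

# Test of equal variances (Levene's test is robust to non-normal data).
lev_stat, p_var = stats.levene(controls_days, xpi_days)
print(f"Levene's test for equal variances: p = {p_var:.3f}")

# A small p-value (e.g. below 0.05) rejects the null hypothesis of equal medians/variances.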
Conclusions/Recommendations

The goal of this project was to reduce the number of days it was taking to move a part from point of failure to the component engineer for evaluation. This goal was accomplished, and the final capability of the process showed a reduction in time of 84%, from 137 days to 22 days. There were four critical problems identified during this project.
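As a quick check of the figures above: a reduction from 137 days to 22 days is (137 - 22) / 137 ≈ 0.84, which matches the reported 84% improvement.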
Monday, August 5, 2019
Energy Efficient Firefly Scheduling in Green Networking
An Energy Efficient Firefly Scheduling in Green Networking with Packet Processing Engines
S. S. Saranya, S. Srinivasan

Abstract - Research on power-saving network devices has recently focused on theoretical models. With the aim of controlling power consumption in core networks, we consider energy-aware devices able to reduce their energy requirements by adapting their performance. We propose a new algorithm for scheduling tasks across different pipelines to balance the energy consumption in the network. The firefly algorithm (FA) is a metaheuristic algorithm inspired by the flashing behavior of fireflies; the primary purpose of a firefly's flash is to act as a signal system to attract other fireflies. A mixed integer linear programming framework solves the virtual topology problem under a communication delay constraint. An arbitrary optical network has been considered, with different distances between the nodes and different link capacities. We use the following steps to minimize energy consumption: (1) packet segmentation, to avoid collisions in a single pipeline, and (2) the firefly algorithm, to optimize the selection of the pipeline. The motivation behind our work is to minimize the energy consumption of the overall system.

Keywords - Packet Segmentation, Green network technologies, Firefly Algorithm.

I. INTRODUCTION

We consider the possibility of adapting network energy requirements to the actual traffic load. Indeed, it is well known that network links and devices are generally provisioned for busy or rush-hour load, which typically exceeds their average utilization by a wide margin. Although this margin is seldom reached, network devices are designed on its basis and, consequently, their energy consumption remains more or less constant even in the presence of fluctuating traffic load. Therefore, the key to any state-of-the-art power-saving criterion lies in dynamically adapting the resources provided at the network, link, or equipment level to current traffic requirements and loads. In this respect, current green networking approaches [1] have been based on various energy-related criteria applied directly to network equipment and component interfaces. Green networking [3] is the practice of selecting energy-efficient networking technologies and products and minimizing resource use whenever possible. It is a broad term referring to methods used to improve networking or make it more efficient, and it extends to and covers processes that reduce energy consumption, as well as processes for conserving bandwidth or any other approach that will ultimately reduce energy consumption and, indirectly, cost. The issue of green networking has numerous important applications, particularly as energy becomes more expensive and people become more aware of the negative effects of energy consumption on the environment. Some of the fundamental techniques associated with green networking include consolidating devices or otherwise streamlining a hardware setup. Software virtualization [4] and efficient server consolidation can contribute to this general objective.
Green networking could also include such diverse ideas as remote desktop computing, energy use in the buildings housing equipment, or other peripheral parts of a network infrastructure. Ideas associated with green networking also address technology services or customer connections that may ultimately be based on a network. This includes green search, or studies of the energy consumption of web search engines, along with many other kinds of analysis of advanced networks and systems. According to various studies, IT can consume up to 2 percent of a country's total energy production. A great part of the experimental data is carried by ESnet and individual research and education (R&E) networks. Gou Gang et al. choose fire stations which possess a certain fire-spread capability and relatively low separation cost as the target combination; fire stations arrive at accident points and conduct rescue work to minimize the loss in the whole accident. In routing, the forwarding engine [9], sometimes called the data plane, is the part of the router architecture that decides what to do with packets arriving on an inbound interface. The idea behind Low-Power Idle is to transmit data as fast as possible and then return to a low-power idle state: the highest rate provides the most energy-efficient transmission (Joules/bit), while LP_IDLE consumes minimal power (Watts). Energy savings come from cycling between active and low-power idle states; power is reduced by turning off unused circuits during LP_IDLE (e.g., portions of the PHY, MAC, interconnects, memory, and CPU), so that energy consumption scales with bandwidth consumption. Raffaele Bolla et al. [10] raise the same concern in their work, saving energy by scaling traffic processing capacities through AR and LPI mechanisms. The rest of the paper is organized as follows: Section II describes related work on reducing energy consumption based on green networking techniques, Section III describes the proposed methods, and test results are shown in Section IV.

II. RELATED WORKS

The FLARE strategy [10] makes it possible to systematically split a TCP flow across multiple paths without causing packet reordering. Srikanth Kandula et al. (2007) proposed FLARE, a new traffic splitting algorithm. FLARE exploits a simple observation: consider load balancing traffic over a set of parallel paths. If the time between two successive packets is larger than the maximum delay difference between the parallel paths, one can route the second packet and subsequent packets from this flow on any available path without any risk of reordering. Thus, instead of switching packets or flows, FLARE switches packet bursts, called flowlets. Dynamic load balancing needs schemes that split traffic across multiple paths at a fine granularity. Current traffic splitting schemes, however, exhibit a tussle between the granularity at which they partition the traffic and their ability to avoid packet reordering; packet-based splitting quickly assigns the desired load share to each path. Power management capabilities [2] exist within the architectures and components of network equipment. R. Bolla et al. (2007) consider the two main types of power management hardware support available today in the largest part of COTS processors and under rapid development in other hardware technologies [11] (e.g., network processors, ASICs and FPGAs).
These power management technologies respectively allow minimizing power consumption when no activities are performed (namely, idle optimizations) and changing the trade-off between performance and energy when the hardware is active and performing operations (namely, power-state optimizations). These kinds of power management support are generally realized at the hardware layer by powering off sub-components or by changing the silicon operating frequency and voltage. Load migration techniques [8]: with wireless resource virtualization, multiple Mobile Virtual Network Operators (MVNOs) can be supported over a shared physical wireless network and traffic loads in a base station. Xiang Sheng et al. present a general optimization framework to guide algorithm design, which solves two sub-problems, pipe assignment and load allocation, in sequence. For pipe assignment, the paper presents an approximation algorithm; for load allocation, it presents a polynomial-time optimal algorithm for a special case where base stations are power-proportional, as well as two effective heuristic algorithms for the general case. Furthermore, the paper presents an effective heuristic algorithm that jointly solves the two sub-problems. A fire resource scheduling model [15] is built on the ground of major hazards, where the time constraints of real dangers and the actual situation of fire resources can be considered on all sides. Thus, in accordance with the bearable loss and time restrictions of major hazards, Gou Gang et al. choose fire stations which possess a certain fire-spread capability and relatively low separation cost as the target combination; fire stations arrive at accident points and conduct rescue work to minimize the loss in the whole accident. In the Linux kernel network subsystem [12], the Tx/Rx SoftIRQ and Qdisc are the connectors between the network stack and the net devices. A design limitation is that they assume there is only a single entry point for each Tx and Rx in the underlying hardware. Although they work well today, they will not in the future: modern network devices (for instance, the E1000 and IPW2200) provide two or more hardware Tx queues to enable transmission parallelization or MAC-level QoS, and these hardware features cannot be supported efficiently with the current network subsystem. Z. Yi et al. (2007) describe the design and implementation of the network multi-queue patches submitted to the mailing lists earlier that year, which included changes to the network scheduler, Qdisc, and generic network core APIs.

III. INVESTIGATION OF PROPOSED METHODS

A pipeline is a set of data processing elements connected in series, where the output of one element is the input of the next one. Figure 1 (parallel pipeline) shows that the elements of a pipeline are often executed in parallel or in a time-sliced fashion; in such cases, some amount of buffer storage is often inserted between elements. The packet processing framework is specifically designed for dealing with network traffic. Figure 2 (framework architecture) represents the parallel processing of different pipelines, with data aggregation, segmentation, and scheduling stages feeding the parallel pipes. In this framework, the firefly scheduling algorithm effectively schedules the incoming traffic load for load balancing, and the distributed load is processed by the different pipelines. Packet segmentation improves system performance by splitting the packets in received Ethernet frames into discrete buffers. Packet segmentation may be responsible for splitting one packet into multiple packets so that reliable transmission of each one can be performed individually. Segmentation may be required when the data packet is larger than the maximum transmission unit (MTU) supported by the network, as illustrated in the sketch below.
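To illustrate the segmentation step just described, the following minimal sketch splits a payload into MTU-sized chunks; the 1500-byte MTU and the 4000-byte packet are assumptions chosen for illustration, not values taken from the paper.

def segment(payload: bytes, mtu: int = 1500) -> list:
    # Split a payload into chunks no larger than the MTU so each chunk
    # can be transmitted (and, if needed, retransmitted) independently.
    return [payload[i:i + mtu] for i in range(0, len(payload), mtu)]

if __name__ == "__main__":
    data = bytes(4000)                       # hypothetical 4000-byte packet
    chunks = segment(data, mtu=1500)
    print([len(c) for c in chunks])          # [1500, 1500, 1000]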
The packet processing framework can be deployed at any layer of the network, either in high-end core routers or in LAN switches. The flexibility of the framework comes from the programmable elements inside it, i.e., network processors (NPs), and a series of stacked network protocols ensures its ability to achieve the performance specification. The firefly algorithm is used for packet scheduling. The firefly algorithm [14] is a metaheuristic algorithm inspired by the flashing behavior of fireflies; the primary purpose of a firefly's flash is to act as a signal system to attract other fireflies. In the task assignment process, packets are distributed across the parallel pipelines. In this module, the segmented data chunks are assigned to queues for processing; the module manages workload distribution to the various parallel pipelines and operates at the transmitting end.

A. Algorithm

The firefly algorithm is a metaheuristic algorithm [16] inspired by the flashing behavior of fireflies. Xin-She Yang [17] formulated the firefly algorithm by assuming:
1. All fireflies are unisexual, so that one firefly will be attracted to all other fireflies;
2. Attractiveness is proportional to brightness, and for any two fireflies, the less bright one will be attracted by (and hence move towards) the brighter one; however, the apparent brightness decreases as their distance increases;
3. If there are no fireflies brighter than a given firefly, it will move randomly.
The brightness should be associated with the objective function. The firefly algorithm is a nature-inspired metaheuristic optimization algorithm.

B. Algorithm Description

The pseudocode can be summarized as:

Begin
  1) Objective function: f(x), x = (x1, ..., xd)
  2) Generate an initial population of fireflies xi (i = 1, ..., n)
  3) Formulate light intensity I so that it is associated with f(x) (for example, for maximization problems, I proportional to f(x), or simply I = f(x))
  4) Define absorption coefficient γ
  While (t < MaxGeneration)
    for i = 1 : n (all n fireflies)
      for j = 1 : n (n fireflies)
        if (Ij > Ii), move firefly i towards j; end if
        Vary attractiveness with distance r via exp(-γ r^2);
        Evaluate new solutions and update light intensity;
      end for j
    end for i
    Rank fireflies and find the current best;
  end while
  Post-processing the results and visualization;
End

The main update formula for any pair of two fireflies x_i and x_j is

  x_i(t+1) = x_i(t) + β0 exp(-γ r_ij^2) (x_j(t) - x_i(t)) + α ε_i(t),

where α is a parameter controlling the step size, while ε_i is a vector drawn from a Gaussian or other distribution. It can be shown that the limiting case γ -> 0 corresponds to the standard Particle Swarm Optimization (PSO); in fact, if the inner loop (for j) is removed and the brightness is replaced by the current global best g*, then FA essentially becomes the standard PSO. The step size α should be related to the scales of the design variables. Ideally, the term α ε should be of order one, which requires that α be linked with the scales; for example, one possible choice is α = 0.01 L, where L is the average scale of the problem. In cases where the scales vary significantly, α can be considered as a vector to suit different scales in different dimensions. Similarly, β0 should also be linked with the scales of the problem.
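As a concrete illustration of the pseudocode above, the following is a minimal Python sketch of the firefly algorithm for a generic minimization problem. The toy objective function, population size, and parameter values (α, β0, γ) are illustrative choices, not the ones used in the paper's scheduling experiments.

import numpy as np

def firefly_minimize(f, dim, n=20, max_gen=100, alpha=0.2, beta0=1.0, gamma=1.0,
                     lower=-5.0, upper=5.0, seed=0):
    # Minimal firefly algorithm: brighter (lower-cost) fireflies attract dimmer ones.
    rng = np.random.default_rng(seed)
    x = rng.uniform(lower, upper, size=(n, dim))      # initial population
    cost = np.array([f(xi) for xi in x])              # light intensity corresponds to low cost
    for _ in range(max_gen):
        for i in range(n):
            for j in range(n):
                if cost[j] < cost[i]:                 # firefly j is brighter: move i towards j
                    r2 = np.sum((x[i] - x[j]) ** 2)
                    beta = beta0 * np.exp(-gamma * r2)
                    x[i] = x[i] + beta * (x[j] - x[i]) + alpha * rng.normal(size=dim)
                    x[i] = np.clip(x[i], lower, upper)
                    cost[i] = f(x[i])
        alpha *= 0.97                                 # gradually reduce the random step
    best = np.argmin(cost)
    return x[best], cost[best]

if __name__ == "__main__":
    sphere = lambda v: float(np.sum(v ** 2))          # toy objective for demonstration
    best_x, best_cost = firefly_minimize(sphere, dim=3)
    print(best_x, best_cost)

In the scheduling context of the paper, the decision vector would instead encode the assignment of packet chunks to pipelines, and the objective would be the resulting energy consumption.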
The pipeline is a client-server processing framework. Incoming flows can be handled by any subset of the pipelines. Each client sends its data to the server for processing; the processing is carried out in the server and the result is returned to the client. The AR and LPI mechanisms allow each pipeline to rapidly manage the engine configuration in order to optimally balance its energy consumption against network performance.

IV. TEST RESULTS

This section describes the performance analysis used to validate the proposed algorithm. Experimental results demonstrate the efficiency of the proposed firefly algorithm. Figure 3 (energy consumption) depicts the energy consumption in the parallel pipelines; the energy consumption varies across the parallel pipelines over time. In this work, incoming packets are segmented into multiple small packets and allotted to different pipelines. These packets are assigned to pipelines based on the size of the chunks by using the firefly algorithm. Data packet 4 takes 18 seconds for processing and data packet 5 takes 18 seconds for processing; a smaller processing time represents lower energy consumption, so data packets 4 and 5 consume less energy. Figure 4 (busy-idle cycle) depicts the busy-idle state in the parallel pipelines. We propose a new scheduling algorithm that schedules the packets to different pipelines based on the capacity of the pipelines and the chunks.

V. CONCLUSION

In this paper, we propose a new scheduling algorithm to minimize the energy consumption in a parallel pipeline system. The firefly algorithm (FA) is a metaheuristic algorithm inspired by the flashing behavior of fireflies; the primary purpose of a firefly's flash is to act as a signal system to attract other fireflies. Firefly-based algorithms for scheduling task graphs and job-shop scheduling require less computation than other metaheuristics, and the firefly algorithm can solve optimization problems in dynamic environments efficiently. The achieved results show how the proposed model can effectively represent energy- and network-aware performance indices. In addition, an optimization framework based on the model has been proposed and experimentally evaluated.

REFERENCES

[1] R. Bolla, R. Bruschi, A. Carrega, and F. Davoli, "Green networking with packet processing engines: Modeling and optimization," IEEE/ACM Transactions on Networking, vol. 22, no. 1, Feb. 2014.
[2] A. Bolla and R. Bruschi, "Energy-aware load balancing for parallel packet processing engines," in Proc. 1st IEEE GREENCOM, Sep. 2011, pp. 105-112.
[3] "Low Energy Consumption NETworks (ECONET) project," 2010 [Online]. Available: http://www.econet-project.eu
[4] "Energy eFFIcient teChnologIEs for the Networks of Tomorrow (EFFICIENT) project," 2010 [Online]. Available: http://www.tnt.dist.unige.it/efficient
[5] "Greening the Network (GreenNet) project," 2012 [Online]. Available: http://www.tnt.dist.unige.it/greennet
[6] B. Heller et al., "ElasticTree: Saving power in data center networks," in Proc. USENIX NSDI, 2010.
[7] S. Kandula, D. Katabi, S. Sinha, and A. Berger, "Dynamic load balancing without packet reordering," Comput. Commun. Rev., vol. 37, pp. 51-62, Mar. 2007.
[8] R. Bolla, R. Bruschi, A. Carrega, and F. Davoli, "Green network technologies and the art of trading-off," in Proc. 30th IEEE INFOCOM Workshops, Shanghai, China, Apr. 2011, pp. 301-306.
[9] R. Bolla, R. Bruschi, F. Davoli, and F.
Cucchietti, "Energy efficiency in the future Internet: A survey of existing approaches and trends in energy-aware fixed network infrastructures," IEEE Commun. Surveys Tuts., vol. 13, no. 2, pp. 223-244, 2nd Quart., 2011.
[10] Z. Yi and P. Waskiewicz, "Enabling Linux network support of hardware multiqueue devices," in Proc. Linux Symp., Ottawa, ON, Canada, Jun. 2007, vol. 2, pp. 305-310.
[11] J. Kennedy and R. Eberhart, "Particle swarm optimisation," in Proc. IEEE Int. Conf. on Neural Networks, Piscataway, NJ, 1995, pp. 1942-1948.
[12] S. Nandy, P. P. Sarkar, and A. Das, "Analysis of nature-inspired firefly algorithm based back-propagation neural network training," Int. J. Computer Applications, vol. 43, no. 22, pp. 8-16, 2012.
[13] S. Palit, S. Sinha, M. Molla, A. Khanra, and M. Kule, "A cryptanalytic attack on the knapsack cryptosystem using binary firefly algorithm," in Proc. 2nd Int. Conf. on Computer and Communication Technology (ICCCT), India, 15-17 Sept. 2011, pp. 428-432.
[14] R. Bolla, R. Bruschi, F. Davoli, and A. Ranieri, "Energy-aware performance optimization for next-generation green network equipment," in Proc. 2nd ACM SIGCOMM PRESTO, Barcelona, Spain, Aug. 2009, pp. 49-54.
[15] X. S. Yang, Nature-Inspired Metaheuristic Algorithms, Luniver Press, UK, 2008.
[16] X. S. Yang, "Firefly algorithms for multimodal optimisation," in Proc. 5th Symp. on Stochastic Algorithms, Foundations and Applications (O. Watanabe and T. Zeugmann, Eds.), Lecture Notes in Computer Science, vol. 5792, pp. 169-178, 2009.
[17] X. S. Yang, Engineering Optimisation: An Introduction with Metaheuristic Applications, John Wiley and Sons, USA, 2010.
Sunday, August 4, 2019
Camden County Essay -- Social Issues, Drugs, Violence
Mobsters, drugs, and violence sound like the plot of a '50s gangster movie, but they are everyday life for people living in Camden County, New Jersey. The city is portrayed as falling apart, overrun with corruption and violence, in Chris Hedges' article "City of Ruins." Soon Camden County will become a forgotten ghost town if it does not make drastic changes to the government and the education system and bring jobs back to the county. The article is about the city of Camden and how it went from being a thriving city to a city that is now in economic crisis. The city has a population of 70,390 and is the poorest city in the nation (16). Camden has an unemployment rate of 30-40% and an average household income of $24,600 (16). In the past, Camden was an industrial giant, with several large companies like Campbell's Soup and RCA having factories there, which employed 36,000 people (17). The closing of the factories is one of the main reasons for Camden County's high unemployment rate. Over the past few years Camden has been forced to make "$28 million in draconian budget cuts, with officials talking about cutting 25 percent from every department, including layoffs of nearly half the police force" (16). With the lack of funds, the county's education system is beginning to suffer: the library budget has been cut by two-thirds, and the schools now have a "70 percent high school dropout rate, with only 13 percent of students managing to pass the state's proficiency exams in math" (16). With all of the empty factories, empty houses, and vacant lots, Camden is becoming a very unappealing and unhappy place to live. Camden is also becoming an unhealthy place to live. It has become overrun with homeless people and "the only white people visi... ...ys a better side to the situation waiting to be found: "Despite Camden's bleakness, despite its crime and its deprivation, despite the lost factory jobs that are never coming back - despite all this, valiant souls somehow rise up in magnificent defiance" (18). The town of Camden is trying to rebuild by constructing new buildings like the aquarium and a new law school, but Hedges clearly states that nothing in this town will prosper if the mob does not want it to. Camden County is a town in complete and total ruins. It is overrun with corruption and violence. Hedges' article does a good job of portraying just how bad off the town truly is. The economy is suffering, the education system is lacking in several areas, and the police force is a joke. If changes are not made soon, Camden County will no longer be a town that anyone will want to live in or visit.
Saturday, August 3, 2019
psychology Essay -- essays research papers
Chapter 2

This article is from the April 2003 issue of Psychology Today. In Chapter 2, behavior is the main topic. Behavior is somewhat unexplainable, but it can be put into the form of patterns or predictions. Also, behavior is largely uncontrolled, but it can be changed to a small degree with the use of medicine or a good diet. The article "Fighting Crime One Bite At A Time" tells how a good diet may decrease the amount of rule breaking by prisoners in jail, and it relates how changing one's nutrition can change one's behavior. The article described an experiment in which 231 inmates were given either vitamin supplements or fake pills to see which group would break the rules more. The vitamin group broke the rules 25% less than the others did. It is interesting that giving criminals the right nutrition may change their behavior.

Chapter 3

Chapter 3 talks about sensation and perception with our eyes. Our eyes affect how we think and perceive things. The eye converts wavelengths of light into signals: the light passes through the pupil to the retina, which contains cones and rods. An article from Let's Live titled "Obesity Increases Cataract Risk" relates how being obese may affect the development of cataracts in your eyes. A cataract is a cloudiness or opacity in the normally transparent crystalline lens of the eye. This cloudiness can cause a decrease in vision an...
Friday, August 2, 2019
Feminism and Equal Rights Essay -- Opportunities, Organized Activity, W
Feminism is the belief in equal rights and opportunities, in organized activity in support of women's rights and interests, and in the theory of the political, economic, and social equality of the sexes (Merriam-Webster). Typically, the word "feminism" has a negative connotation associated with it, and feminists are stereotyped as closed-minded, man-hating, ugly, and whiny, among many other things. However, these stereotypes are much exaggerated, and while they may be true of some feminists, most are normal women who could not be picked out from a crowd. Modern-day feminists are following in the footsteps of their ancestors who, starting in the late 1800s, have participated in three major feminist movements (Stockton). The first of these movements occurred in the late 19th and early 20th centuries. The goal of the first wave was to open more doors of opportunity for women, with a main focus on suffrage. The wave officially began at the Seneca Falls Convention in 1848 (Stockton). Here, over 300 men and women rallied for the equality of women (Ruether). In its early stages, feminism was often related to the temperance and abolitionist movements. This first wave of feminist movements is often referred to as the "Suffrage Movement" (Gender Press). The movement helped give voice to many early feminist advocates who are famous today, some of whom include Sojourner Truth, Elizabeth Cady Stanton, and Susan B. Anthony. These women fought for the right to vote, a privilege that was reserved for men (Stockton). The movement transformed into something much larger when the National Women's Rights Convention was formed a few years later. This movement led to the 19th Amendment being passed in 1920. The 19th Amendment outlawed gender-biased vot... ...//genderpressing.wordpress.com/2013/08/26/feminism-the-first-wave-2/>.
9. Encyclopedia Britannica Online. Encyclopedia Britannica, n.d. Web. 14 May 2014.
10. "The History of Second Wave Feminism." Suite. N.p., n.d. Web. 14 May 2014.
11. "Third Wave Foundation." Third Wave Foundation History Comments. N.p., n.d. Web. 15 May 2014.
12. "A Manifesto for Third Wave Feminism." Alternet. N.p., n.d. Web. 15 May 2014.
13. Duca, Lauren. "A Definitive Guide To Celebrity Feminism In 2013." The Huffington Post. TheHuffingtonPost.com, 22 Dec. 2013. Web. 15 May 2014. http://www.huffingtonpost.com/2013/12/22/celebrity-feminisn_n_4476120.html.
Thursday, August 1, 2019
Bullying And Teen Suicide
Bullying is done purposefully to hurt, threaten, or scare someone. It can be done orally with words or physically with actions. One or more persons can be involved in bullying, and the degree of cruelty also varies. Bullying can include name calling, teasing, stopping a person from going where he or she wants to go or from doing what he or she wants to do, or injuring someone physically. Bullies usually have average or above-average self-confidence, look for recognition or attention from peers, find pleasure in causing injury to others, make themselves look strong, seek to control other people or situations, and are described as hot-tempered and rash (Zirpoli, 2008). Bullies are common among students who come from families with little tenderness or affection. Parents of bullies monitor their children very little and use discipline inconsistently. Parents of bullies also employ inflexible discipline styles, in which physical punishment is very common (DeHann, 1997). Students often display the same behavior observed within their home environment, including rude behavior displayed by parents toward each other or toward others. Bullies are not generally model students. Very frequently, they are not successful in school and have poor relations with their teachers. Bullies have trouble with social skills, are not capable of making friends easily, and do not know healthier ways to connect with others.

Bullying effects

Being a victim of bullying is very traumatic for children. Short-term effects of bullying include developing a hatred of going to school. Many victims start to distrust their classmates at school and have problems making friends. Some victims develop physical illness or depression. The long-term effects of bullying include damage to a child's health that continues into adult life. It increases anxiety, damages self-esteem, and can cause severe depression. Some children even have suicidal thoughts and commit suicide. Phoebe Prince, 15, a freshman at South Hadley High School in western Massachusetts, is an example of teen suicide caused by bullying. Prince hanged herself at her home on January 14, 2010, after being subjected to physical mistreatment and verbal harassment that day (CNN, 2010). Earlier that day, she had been harassed in the South Hadley High School library while she was studying. The harassment took place in front of a staff member and many students, none of whom reported it until after the girl's death. Phoebe was also harassed while walking through the school hall that day and while walking down the street toward her house. The bullies also threw a canned drink at her as she walked home. One male and two female students were involved in the harassment on January 14. The harassment was provoked by the group's disapproval of Phoebe's brief dating relationship with a male student. But that day's events were not the only reason for Phoebe's death; she had been verbally harassed and physically threatened for three months before her death. The group who bullied Phoebe crossed normal limits and went beyond ordinary teenage quarrels. The group was also determined to disgrace her and make it impossible for Phoebe to continue at school. She was also harassed on the internet through social networking sites, but the bullying was mainly conducted on school premises during school hours (Eckholm & Zezima, 2010).
Therefore, bullying can have serious negative consequences, even death, as happened in the Phoebe Prince case. Phoebe took her own life to escape bullying at school, on Facebook, and through text messages. Therefore, anti-bullying laws need to be implemented and bullies should be punished severely.

References

CNN (2010). More students disciplined following girl's suicide. Retrieved March 31, 2010 from http://www.cnn.com/2010/CRIME/03/30/massachusetts.bullying.suicide/index.html
DeHann, L. (1997). Bullies. Retrieved February 1997 from http://www.ag.ndsu.edu/pubs/yf/famsci/fs570w.htm
Eckholm, E. & Zezima, K. (2010). 6 teenagers are charged after classmate's suicide. Retrieved March 29, 2010 from http://www.nytimes.com/2010/03/30/us/30bully.html
Zirpoli, T. J. (2008). Bullying behavior. Retrieved from http://www.education.com/reference/article/bullying-behavior/
The weak are forced to create alternative realities Essay
The brain is a crucible: a melting pot of intersecting ingredients that forges a reality that is deceptively the same, but often vastly different, for each individual. That reality is a construct is a fashionable idea these days; it means that we tend to see reality from a particular frame of reference. There is always a context, whether it be political, social, or cultural. It is those who are unable to construct a satisfactory reality who are forced to create an alternative one, perhaps one that fulfils their dreams and meets their views and values. In the words of cognitive neuropsychologist Kaspar Meyer, "what is now clear is that the brain is not a stimulus-driven robot that directly translates the outer world into a conscious experience. What we're conscious of is what the brain makes us be conscious of, and in the absence of incoming signals, bits of memories tucked away can be enough for a brain to get started with". Reality for each individual differs according to their past experiences and memories, as well as what they choose to perceive to be true. Those with weaker frames of mind, such as individuals suffering from mental disorders or simply living under delusion, tend to create alternative realities in order to escape the harsh truth.

Consider the materialism of the post-war United States. Motivated by prosperity and wealth, all Americans were expected to achieve the profound 'American Dream', which Arthur Miller critiques throughout his play 'Death of a Salesman'. The play's lead character, Willy Loman, struggles to face his true reality and instead chooses to believe he is leading the life he had always dreamt of. Willy believes himself to be the best salesman in his company, claiming he is "well liked" by all and "vital in New England", when in fact his true reality proves to be quite the opposite. Willy struggles to pay his mortgage and fails to support and provide for his family. Even though his favourite son Biff finds the words to call him out for what he truly is, "(a) fake… (a) big phoney fake" and "a dime a dozen", Willy remains ignorant of the truth. Willy's alternative reality provides him with the motivation to continue his life, despite the loss of his job and the loss of respect from Biff. Alternative realities provide temporary relief from the harsh truth of reality, which is sometimes necessary for those who are considered mentally weak.

It is often easier to support the alternative realities created by the mentally weak. Because of their mental state, disregarding what they believe to be true can carry serious consequences. In 'Death of a Salesman', Willy's wife Linda remains supportive throughout her husband's delusion. He claims she is his "foundation (and) support", which simply conforms to the expected role of a 1950s housewife. Another example is the 2010 film 'Shutter Island', directed by Martin Scorsese, which clearly highlights the importance of accepting the alternative realities created by the mentally weak. The film's protagonist, Teddy Daniels, believes himself to be a U.S. marshal assigned to investigate the disappearance of a patient from Boston's Shutter Island mental institution. In fact, Teddy is actually Andrew Laeddis, one of the institution's most dangerous patients because of his delusions and his violence towards the staff and the other patients.
Andrew's (or Teddy's) delusion created an alternative reality in which he was able to escape the truth about his murderous past. To support this alternative reality, the staff at the institution developed a scenario in which Andrew was able to live out his delusion, thereby preventing the otherwise dangerous psychological effects of confronting his true nature. If Andrew had been exposed to his true reality rather than living as his alter ego, he might not have survived, which demonstrates the importance of supporting a mentally weak individual's alternative reality.

Alternative realities may not always be negative. In these cases, the alternative reality protects the individual from harm, or from the negative attention that would come with exposing their true self. Consider the death of Whitney Houston, or the even more recent death of Robin Williams. Despite their true reality consisting of depression and substance abuse, these two renowned celebrities developed and maintained an alternative reality that allowed others to see them as role models and successful artists. In the case of Robin Williams, his severe depression led to his suicide. As a comedian and successful actor, Williams was perceived by the majority to be a motivated, happy man. In truth, despite working to make other people laugh, he was diagnosed with severe depression, to the point where he eventually took his own life. Robin Williams's alternative reality led others to see him as he was not, without the negative attention of showing who he really was. In Whitney Houston's case, despite her image as an iconic, successful singer, her alternative reality concealed a cocaine addiction, and she eventually drowned in a hotel bathtub. Following their deaths, the public was finally made aware of who they truly were, regardless of what we had previously perceived them to be. Alternative realities such as these can be crucial to ensuring happiness and satisfaction for the individual, without exposing their true selves to the world.

Those who are mentally weak tend to create alternative realities in order to avoid their true selves. Whether they are living within a delusion, such as Willy Loman, or suffering from a mental condition, such as Andrew Laeddis (otherwise known as Teddy), alternative realities may be beneficial for the individual, however difficult they are for others to accept. Because realities differ between individuals according to social, emotional, cultural, and political factors, each person must construct the reality that best suits their views and values, even if that results in alternative realities being created. In the words of author Mignon McLaughlin, "a critic can only review the book he has read, not the one which the author wrote", and therefore we cannot judge an individual's choice of reality, or alternative realities, without experiencing it ourselves first hand.