Personality Traits and Consumer Behavior

Personality is difficult to explain in a single sentence, but it is important to understand the basic idea of personality and consumer behavior before exploring how specific traits can influence decision making.

Personality can be defined as the unique, dynamic organization of a particular person's physical and psychological characteristics, which influences behavior and responses to the social and physical environment.

There has been tremendous progress in the field of personality analysis during the 19th and 20th centuries, which has produced different types of personality theories. The reasons that personality impacts consumer behavior can be found in the four main theories of personality: the psychodynamic, humanistic, trait, and cognitive-social theories.

Psychodynamic theories explain behavior based on unconscious and conscious influences, the most important being Freudian theory, which comes from Sigmund Freud's psychoanalytic work.

Freudian theory itself is based on the existence of unconscious needs or drives at the heart of human motivation and personality. According to Freud, human personality consists of three systems: the id, the superego, and the ego. The id is the “warehouse” of primitive drives and basic physiological needs such as hunger, thirst, and sex. The superego drives the individual to fulfill those needs in a socially acceptable manner. Finally, the ego is the internal monitor that balances the demands of the id and the superego.

Humanistic personality theories state that humans continually strive to reach higher levels of achievement, and as such continue to change over time. Every person is born with certain traits; people are considered fully functioning when they realize their inborn traits and are no longer swayed by the opinions of others. According to humanistic theories, people who lose sight of the traits they were born with will never become fully functioning, and therefore will never find happiness in their lives, because they are too concerned with pleasing everyone around them.


What is particularly interesting is how research has shown that these different personality groups differ in their brand usage.

Unlike Freudian and Neo-Freudian theories, trait theory is less qualitative and more focused on the measurement of personality. According to trait theory, a person's traits determine how the person acts, and these traits define the person's personality. Tests can be performed to measure a single trait in consumers, such as how receptive they are to particular themes.

Finally, cognitive-social learning theories state that every person has unique internal values that they live up to, which shape the person's personality. Behavior is influenced by the person's life history, such as their immediate environment, past experiences, and continued learning throughout life.

Through their analyses, the world's leading brands have come to know that people follow certain patterns that exist in their subconscious minds when they purchase. The brands then capitalize on these patterns to attract more customers. Brands are in fact able to identify and categorize consumers while fine-tuning their marketing strategies to reach people in a better and more productive way.

Personality trait theory shows the most promise in linking personality to an individual's preferences. A trait is a characteristic or individual difference by which one person varies from another in a relatively permanent and consistent way, yet which is common to many individuals.

 

Personalization and Content-based Recommender Systems

Personalization is a big trend today. There is so much information available that we need to find new ways to filter, categorize and display data that is relevant.

Recommender systems guide users in a personalized way to interesting objects in a large set of possible options.

Content-based systems try to recommend items similar to those a given user has liked in the past.

The basic process performed by a content-based recommender system consists of matching the attributes of a user profile, in which preferences and interests are stored, with those of an item. These attributes have been previously collected and are subjected to analysis and modeling with the intent of arriving at a relevant result.

The recommendation process is performed in 3 steps:

  1. The Content Analyzer: When information has no structure, it is the Content Analyzer’s role to provide the structure necessary for the next processing steps. Data items are analyzed by feature extraction techniques to shift item representation from the original information to the target one. This representation is the input for the next 2 steps.
  2. The Profile Learner: This module collects data from the Content Analyzer and tries to generalize it, building a user profile. The generalization strategy is usually performed using machine learning techniques, which are able to infer a model of user interests.
  3. The Filtering Component: This module uses the user profile to suggest relevant items by matching the profile representation to the items being recommended.

The process begins with the “Content Analyzer” extracting features (keywords, concepts, etc.) to construct an item representation. A profile is created and updated for the active user, and reactions to the items are collected in some way and stored in a repository. These reactions, called feedback or annotations, are combined with the related item descriptions and exploited while learning a model to predict the relevance of newly presented items. Users can also provide initial information to build a profile without the need for feedback.

Generally, feedback can be positive or negative, and two types of techniques can be used to gather it: implicit and explicit.

Explicit feedback can be obtained by gathering likes/dislikes, ratings, and comments, while implicit feedback is derived from monitoring and analyzing the user's activities.
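
As a toy illustration of how such signals might be reduced to a single relevance label (the field names and thresholds below are hypothetical, not taken from any particular system):

```python
# Toy reduction of explicit and implicit feedback to a relevance label.
# The event fields and thresholds are hypothetical, purely for illustration.
def relevance(event):
    if "rating" in event:                 # explicit feedback: 1-5 stars
        return event["rating"] >= 4
    if "dwell_seconds" in event:          # implicit feedback: time on item
        return event["dwell_seconds"] > 30
    return False

print(relevance({"rating": 5}))           # True
print(relevance({"dwell_seconds": 12}))   # False
```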

The “Profile Learner” generates a predictive model utilizing supervised learning algorithms, which is then stored to be used later by the “Filtering Component”. Users' tastes are likely to change over time, so it's important to keep this information up to date and feed it back into the “Profile Learner”.
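
To make the three components concrete, here is a minimal sketch of a toy content-based recommender. It uses scikit-learn, which the post does not mention, and a hypothetical item catalog: TF-IDF stands in for the Content Analyzer, averaging the liked-item vectors stands in for the Profile Learner, and cosine-similarity ranking stands in for the Filtering Component.

```python
# A minimal content-based recommendation sketch (not the specific system
# described above): items are plain-text descriptions, the user profile is
# the average TF-IDF vector of the items the user liked, and the filtering
# step ranks unseen items by cosine similarity to that profile.
import numpy as np
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

# Hypothetical catalog of item descriptions (input to the Content Analyzer).
items = {
    "i1": "lightweight trail running shoes with breathable mesh",
    "i2": "waterproof hiking boots for rocky terrain",
    "i3": "noise cancelling over-ear headphones",
    "i4": "wireless earbuds with long battery life",
}
liked = ["i1", "i2"]          # explicit positive feedback from the user

# Content Analyzer: turn raw text into feature vectors.
vectorizer = TfidfVectorizer(stop_words="english")
ids = list(items)
matrix = vectorizer.fit_transform(items[i] for i in ids)

# Profile Learner: aggregate the liked-item vectors into a user profile.
liked_idx = [ids.index(i) for i in liked]
profile = np.asarray(matrix[liked_idx].mean(axis=0))

# Filtering Component: score unseen items by similarity to the profile.
scores = cosine_similarity(profile, matrix).ravel()
recommendations = sorted(
    (i for i in ids if i not in liked),
    key=lambda i: scores[ids.index(i)],
    reverse=True,
)
print(recommendations)  # items most similar to what the user liked
```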

Amongst the advantages of the Content-based recommendation systems are:

  • User independence, since recommendations are based solely on the user's own ratings;
  • Transparency, since how the system arrives at a particular recommendation can be explained in terms of the content features and descriptions it used; and
  • New items can be recommended even when they have not yet been rated by any user.

Content-based recommendation systems also have disadvantages:

  • Limited Content: There is a natural limit to the number and type of features that can be associated with the objects being recommended, so the information collected might not be sufficient to define a particular user's interests.
  • Over-Specialization: Content-based recommendation systems have no way to recommend something unexpected. The system is limited to scoring items and matching them to the user's profile, solely based on similarities to items the user has already provided positive feedback on. This drawback is also known as the “serendipity” problem, reflecting the tendency of the system to limit its degree of novelty.

 

 

Railway Bridge Health Monitoring System

In my last post I put forth the idea of using the unique capabilities and UX of the iPhone to help track defects in railways, which came about after my initial conversations with a friend from the railroad industry.

Hours into our conversation I was perplexed by the lack of proactive monitoring of today's bridges used by trains for transport.

Should a uniquely located bridge collapse, an energy crisis could ensue as a result of the coal fields in the northeast/midwest being severed from the southwest.

I looked at several existing methods and solutions used today to address this issue and drew from each to arrive at a refined approach to monitoring the health of railway bridges.

There were basically 3 design considerations which needed to be met:

  1. Easy to deploy
  2. Low Maintenance
  3. Long Term

The system had to be easily deployable, so that an electrician in the field could install the components of the solution. Low maintenance is obviously also key, reducing the total cost of ownership, and long-term operation reduces the need for personnel to visit these bridges.

Application Requirements:

In order to monitor the health of a structure, vibrations of the structure need to be gathered and analyzed to develop a baseline under normal conditions. Subsequent measurements of vibrations can then be compared to the baseline to determine if an anomaly exists.

To accomplish this, sensors (3-axis accelerometers) are placed throughout the span of the bridge to collect data. The frequency components of interest range between 0.25 and 20 Hz, the measurements need to take place 40 seconds before and after the passage of the train, and time synchronization between the sensors is also a factor to take into account.
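
As an illustration of the baseline-versus-anomaly idea (a sketch under assumed parameters, not the analysis used by any system mentioned here), the snippet below averages the 0.25-20 Hz power spectra of known-good accelerometer recordings and flags a new recording whose spectrum deviates strongly from that baseline. The sampling rate, threshold, and synthetic data are hypothetical.

```python
# Illustrative baseline/anomaly comparison for bridge vibration data.
# Assumes each recording is a 1-D array of vertical acceleration samples.
import numpy as np
from scipy.signal import welch

FS = 100.0           # hypothetical sampling rate in Hz
BAND = (0.25, 20.0)  # frequency band of interest from the requirements

def band_spectrum(signal, fs=FS):
    """Power spectral density restricted to the 0.25-20 Hz band."""
    freqs, psd = welch(signal, fs=fs, nperseg=1024)
    mask = (freqs >= BAND[0]) & (freqs <= BAND[1])
    return freqs[mask], psd[mask]

def build_baseline(recordings):
    """Average the band-limited spectra of known-good recordings."""
    spectra = [band_spectrum(r)[1] for r in recordings]
    return np.mean(spectra, axis=0)

def is_anomalous(recording, baseline, threshold=3.0):
    """Flag a recording whose spectrum deviates strongly from the baseline."""
    _, psd = band_spectrum(recording)
    deviation = np.linalg.norm(psd - baseline) / np.linalg.norm(baseline)
    return deviation > threshold

# Example with synthetic data standing in for real sensor measurements.
rng = np.random.default_rng(0)
baseline_recs = [rng.normal(size=8000) for _ in range(5)]
baseline = build_baseline(baseline_recs)
print(is_anomalous(rng.normal(size=8000), baseline))
```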

Existing approaches use technology such as solar panels to supply power in remote areas, GSM for data transmission, GPS for time synchronization, and a star topology in which the sensors communicate with a head node that collects and transmits the data for analysis.

There are multiple problems here: solar panels are expensive and prone to theft, vandalism, and damage; GSM data transmission isn't always viable where there is no network coverage in remote areas; and relying on a head node to collect and transmit the data would be like putting all your eggs in one basket. If the head node failed, the system would stop working.

The techniques I came across basically fell into 2 categories: existing bridges and new bridges.

I focused on existing bridges since there are very sophisticated things being done with new bridges. Today engineers are embedding sensors and fiber in the concrete while the bridges are being built in order to take measurements, but this approach is obviously not viable for existing bridges.

The methods in use for existing bridges included visual inspection; wired solutions, which were bulky, expensive, and time-consuming to set up; and a few wireless solutions, some of which were proprietary and not scalable, along with some interesting work from India.

In summary, there are several challenges in deploying such a solution at sometimes remote and hostile locations: a lack of power, which calls for alternate sources of energy; a way to effectively and reliably collect and transmit the data for analysis; and keeping installation and maintenance costs low.

Since the train comes and goes, so can the data collected by the sensors. The train would activate the standby sensors as it approaches the bridge and then collect the data buffered by the sensors after passing it. This approach would deal with the data transmission limitations while eliminating the need for power for this component of the system. The train would carry the data and upload it to a collection station.

https://www.youtube.com/watch?v=PVH1K1Eocz0

To deal with the reliability and power requirements, the linear-path star topology would be dropped in favor of a mesh network, which provides true self-organizing and self-healing properties. On top of the mesh network, TSMP (Time Synchronized Mesh Protocol) would be used, providing more than 99.9% reliability and the lowest power consumption per delivered packet.

The key for achieving maximum reliability is to use channel hopping, in which each packet is sent on a different channel. In this case, transient failures in links on a given channel are handled gracefully, and persistent link failures that develop after the site survey do not destabilize the network.
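
For intuition, here is a toy sketch of channel hopping, in which each time slot is mapped deterministically to a channel from a shared seed so that consecutive packets are spread across the band. This is only an illustration of the general idea, not TSMP's actual schedule, and the seed and channel list are hypothetical.

```python
# Toy illustration of channel hopping: the channel used in each time slot
# is derived from a shared network seed, so consecutive packets are spread
# across the available channels. This is a simplification, not TSMP itself.
import hashlib

CHANNELS = list(range(11, 27))  # e.g. IEEE 802.15.4 channels 11-26

def channel_for_slot(slot, network_seed=b"bridge-net-42"):
    """Deterministically map a time slot to a channel all nodes agree on."""
    digest = hashlib.sha256(network_seed + slot.to_bytes(8, "big")).digest()
    return CHANNELS[digest[0] % len(CHANNELS)]

# Two nodes sharing the seed compute the same hop sequence independently.
print([channel_for_slot(s) for s in range(10)])
```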

Sensors of this type, using this approach, can last 7-10 years on a small battery, meeting the application requirements.

Now to raise some money, build a working prototype and demo it to the Railway companies.


Diabetes

My first exposure to diabetes was while living in the UK during my teenage years. I remember a girl who used to leave class at a specific time everyday in order to inject herself with insulin. Obviously at that time I was completely ignorant and so were my classmates who made cruel comments about the daily event.

A number of years later my mother developed type 2 diabetes, which was treated with medication and a diet. Unfortunately, dieting turned out to be really difficult for her, so the use of insulin became necessary.

Medication gave way to a device to check blood sugar levels and a shot of insulin once a day. Elevated blood glucose levels lead to damage of the blood vessels, which over the years affected her eyesight, her ability to heal quickly from leg and foot wounds, and her kidneys. She passed away at 69.

The Internet, and most recently the move to view it as a platform, brought about the development and evolution of web-based communities such as social-networking sites like “Tu Diabetes”, which was founded by my friend Manny Hernandez in March 2007 and today has 5,394 members and is going strong.

Both type 1 and type 2 diabetes are at least partly inherited. Type 1 diabetes appears to be triggered by some (mainly viral) infections, or less commonly, by stress or environmental exposure (such as exposure to certain chemicals or drugs). There is a genetic element in individual susceptibility to some of these triggers which has been traced to particular HLA genotypes (i.e., the genetic “self” identifiers relied upon by the immune system). However, even in those who have inherited the susceptibility, type 1 diabetes mellitus seems to require an environmental trigger. A small proportion of people with type 1 diabetes carry a mutated gene that causes maturity onset diabetes of the young (MODY).

There is a stronger inheritance pattern for type 2 diabetes. Those with first-degree relatives with type 2 have a much higher risk of developing type 2, increasing with the number of those relatives. Concordance among monozygotic twins is close to 100%, and about 25% of those with the disease have a family history of diabetes. Candidate genes include KCNJ11 (potassium inwardly rectifying channel, subfamily J, member 11), which encodes the islet ATP-sensitive potassium channel Kir6.2, and TCF7L2 (transcription factor 7–like 2), which regulates proglucagon gene expression and thus the production of glucagon-like peptide-1.[3] Moreover, obesity (which is an independent risk factor for type 2 diabetes) is strongly inherited.[17]

Various hereditary conditions may feature diabetes, for example myotonic dystrophy and Friedreich’s ataxia. Wolfram’s syndrome is an autosomal recessive neurodegenerative disorder that first becomes evident in childhood. It consists of diabetes insipidus, diabetes mellitus, optic atrophy, and deafness, hence the acronym DIDMOAD.[18]

This is something that today I think I should probably look out for, and so my quest for information and prevention begins. 23andMe, a start-up company named after the number of chromosome pairs in humans, wants to help you understand what your genes mean by indexing them and highlighting significant findings, and Type 2 Diabetes is one of the conditions that 23andMe analyzes.

For the price of $399 through their online store, they will mail you a kit with a test tube that you will send back with a sample of your saliva. After 4 to 6 weeks you will receive a report to better understand your ancestry, genealogy, and inherited traits.

Specifically for Type 2 Diabetes, you will get:

  • An estimate, based on currently available information, on whether your genetic risk of Type 2 Diabetes is higher or lower than average.
  • Your results at 9 markers.
  • A look at how Type 2 Diabetes works, a history of the condition, and a list of counselors, links and support groups for Type 2 Diabetes in your area.

In the United States, almost 21 million children and adults have diabetes, and the rate of new diagnoses is increasing, so I will get going with a visit to the doctor and then order one of these kits.


Scientists Erase Mice Memories

In the vein of “Total Recall”, “Johnny Mnemonic”, and most recently “Paycheck”, scientists have been able to erase specific memories from mice without damaging the brain.

By manipulating the levels of an important protein in the brain, certain memories can be erased, according to a group of scientists led by Joe Tsien of the Medical College of Georgia.

Although some experts have suggested that it could be valuable to erase certain memories in people, like traumas during war, Tsien doubts that this could be achieved the way it was done with mice.

“Our work reveals a molecular mechanism of how that can be done quickly and without doing damage to brain cells,” says the Georgia Research Alliance Eminent Scholar in Cognitive and Systems Neurobiology.

Tsien also raised the ethical and moral implications of erasing people's memories.

Memory has four distinct stages: learning, consolidation, storage, and recall. It has been difficult to dissect the molecular mechanisms of these stages because researchers lacked techniques to manipulate proteins quickly. For example, when researchers disabled a gene suspected to play a role in the memory process, the deletion typically persisted throughout the entire period, so it was impossible to tell which parts of the process were impaired. Previous technology would take several days to switch off a protein, which is the product of a gene.

All of our memories, including those which are emotionally painful, have their purpose. It is those memories and experiences which shape our character and make us who we are.

Medical College of Georgia News – Memories selectively, safely erased in mice

Technology Review – Selectively Deleting Memories


Weather Watchers

Looking for a way to keep track of hurricanes? The National Weather Center website not good enough?

The guys over at Stormpulse.com have put together a great resource on hurricane activity.

  1. You can view hurricanes and hurricane season dating back to 1851 by entering in a URL such as: http://www.stormpulse.com/hugo, or http://www.stormpulse.com/1944.
  2. Cloud cover (updated every 6 hours) is available back to 2005. Coverage is still a bit spotty and you may notice some of it is missing. In time (literally a matter of weeks), we will have cloud cover back to 2002.
  3. The map interface is meant to be like that of Google Maps–you can click or drag your mouse to pan, and use the + or – buttons at the top-left to zoom in and out.
  4. Clicking a city when a storm is active provides you with wind probabilities for that location over the next 5 days.  On the other hand, clicking on a city doesn’t do anything (yet) when the storm you have selected isn’t an active cyclone. However, it will draw a yellow line and provide the distance from the selected storm (and plotpoint) and the city over which you hover your mouse.
  5. You can interact with storm data at the most granular level by clicking on a plotpoint in the storm’s track. This will jump you to that point in the storm’s history.
  6. We currently have issues with Internet Explorer and the site. If you want to get the most out of the site, we strongly recommend Firefox.
  7. Clicking on a storm in the “2006 Storm Season Summary” should open up a historical description of the storm and pan over to the storm in the map window.
  8. Satellite images update every half-hour or so. We are collecting water vapor, infrared, rgb, visible, and more, but only displaying rgb and ir for now. This will change in the near future, hopefully with a better way of organizing them as well.
  9. Yahoo! News articles are brought in from around the web relating to ‘hurricanes’.
  10. Present weather conditions for land and sea stations are available in a table-format down below the satellite images.
  11. Photos are being pulled in from Flickr that relate to the content ‘in focus’. This works on a limited basis, but if you want to give it a try, go to http://www.stormpulse.com/katrina and watch the photos change in the ‘Tropical Weather Photos’ area of the page–they should go from photos for the 2006 hurricane season to images captured during Katrina.


BioInformatics – Part 1

Bioinformatics uses advances in the areas of computer science and information technology to solve complex problems in the life sciences, and particularly in biotechnology. In the past decades the fields of genomics and, more recently, proteomics have matured to the point where research methodologies have grown in reliability and sophistication while the cost per analysis has dropped.

The challenges that technology addresses involve the mining, analysis, visualization, and storage of the immense amounts of data being generated by these analyses.

The potential of bioinformatics to identify useful genes, leading to the discovery and development of new drugs, has made the field more and more computationally intensive. Genomics has revolutionized drug development, gene therapy, and our entire approach to health care and human medicine.

Fundamental Problems in Bioinformatics:

  • Pairwise Sequence Alignment (a minimal alignment sketch follows this list)
  • Multiple Sequence Alignment
  • Phylogenetic Analysis
  • Sequence Based Database Searches
  • Gene Prediction
  • Structure Prediction (RNA and Protein)
  • Protein Classification
  • Gene Expression


The Small Giant – Nanotechnology

Nanotechnology or Nanoscience is the science of the extremely small – objects smaller than 100 nanometres (0.00001 cm). At these scales, the properties of materials change dramatically. Factors such as Brownian motion, surface stickiness and quantum effects become important.

Nanotubes

Nanotechnologies are based on a range of new materials, including carbon C60, carbon nanotubes, nanoparticles, nanowires, and polymers based on nano-size subunits.

A huge range of applications are possible, based on stronger, lighter or smaller materials, or compounds with unusual optical or electrical properties.

Early applications are enhancing existing products – tennis racquets, golf clubs, and sunscreens. Possible medical applications include better implants, wound dressings, diagnostics and cancer treatments.
Combining biological molecules with nanomechanical components is creating radically new materials; these are at an early stage of development.

The public currently has little input into policy making in science and technology. Environmental concerns focus mainly on nanoparticles but very little is known about their impact on living things.
Nanotechnologies could increase the divide between rich and poor, but could also provide products useful to the developing world and may be easier for poorer countries to take up.

Dye-sensitized solar cells (DSSCs) based on nanocrystalline titanium dioxide (TiO2) thin films are built using a process called microfabrication based on nanotechnology.

These solar cells have reported solar-to-electric energy conversion efficiencies of greater than 10%. Given the low cost of their components and their demonstrated conversion efficiencies, DSSCs show true promise for being an inexpensive, renewable, and environmentally benign alternative to fossil fuels.

Here is a guide by a 7th Grader (Maria Andreina Murillo) on how to create a solar cell by microfabrication using nanomaterials.
