Personality Traits and Consumer Behavior

Personality is difficult to capture in a single sentence, but it is important to understand the basic idea of personality and consumer behavior before exploring how specific traits can influence decision making.

Personality can be defined as the unique, dynamic organization of a particular person's physical and psychological characteristics, which influences that person's behavior and responses to the social and physical environment.

There has been tremendous progress in the field of personality analysis during the 19th and 20th centuries, which has produced different types of personality theories. The reason personality impacts consumer behavior can be found in the four main theories of personality: the psychodynamic, humanistic, trait, and cognitive-social theories.

Psychodynamic theories explain behavior based on unconscious and conscious influences, the most important being Freudian theory, which comes from Sigmund Freud's psychoanalytic theory.

Freudian theory itself is based on the existence of unconscious needs or drives at the heart of human motivation and personality. According to Freud, human personality consists of three systems: the id, the ego, and the superego. The id is the “warehouse” of primitive drives and basic physiological needs such as hunger, thirst, and sex. The superego drives the individual to fulfill those needs in a socially acceptable fashion. Finally, the ego is the internal monitor that balances the demands of the id and the superego.

Humanistic personality theories state that humans continually strive to reach higher levels of achievement, and as such continue to change over time. Every person is born with certain traits, and people are considered fully functioning when they live up to those inborn traits and are no longer swayed by the opinions of others. According to humanistic theories, people who lose sight of the traits they were born with will never become fully functioning, and will never find happiness, because they are too concerned with pleasing everyone else around them.


What is particularly interesting is that research has shown these different personality groups differ in their brand usage.

Unlike Freudian and neo-Freudian theories, trait theory is less qualitative and more focused on the quantitative measurement of personality. According to trait theory, a person's traits determine how the person acts, and these traits define the person's personality. Tests can be performed to measure a single trait in consumers, such as how receptive they are to particular themes.

Finally, the cognitive-social learning theories state that every person has unique internal values that they live up to, which shape the person's personality. Behavior is influenced by the person's life history: the immediate environment, past experiences, and continued learning throughout life.

Through analysis, the world's leading brands have come to know that people follow certain purchase patterns that exist in their subconscious mind. The brands then capitalize on these patterns to attract more customers: they identify and categorize consumers while fine-tuning their marketing strategies to reach people in a better and more productive way.

Personality trait theory shows the most promise in linking personality to an individual's preferences. A trait is a characteristic or individual difference in which one person varies from another in a relatively permanent and consistent way, yet which is common to many individuals.


Personalization and Content-based Recommender Systems

Personalization is a big trend today. There is so much information available that we need new ways to filter, categorize, and display the data that is relevant.

Recommender systems guide users in a personalized way to interesting objects in a large set of possible options.

Content-based systems try to recommend items similar to those a given user has liked in the past.

The basic process performed by a content-based recommender system consists of matching the attributes of a user profile, in which preferences and interests are stored, against the attributes of an object. These attributes are collected beforehand and subjected to analysis and modelling with the intent of arriving at a relevant result.

The recommendation process is performed in 3 steps:

  1. The Content Analyzer: When information has no structure, it is the Content Analyzer’s role to provide the structure necessary for the next processing steps. Data items are analyzed by feature extraction techniques to shift item representation from the original information to the target one. This representation is the input for the next 2 steps.
  2. The Profile Learner: This module collects data from the Content Analyzer and tries to generalize it, building a user profile. The generalization strategy is usually performed using machine learning techniques, which are able to infer a model of user interests.
  3. The Filtering Component: This module uses the user profile to suggest relevant items by matching the profile representation to the items being recommended.

The process begins with the “Content Analyzer” extracting features (keywords, concepts, etc.) to construct an item representation. A profile is created and updated for the active user, and the user's reactions to items are collected in some way and stored in a repository. These reactions, called feedback or annotations, are exploited in combination with the related item descriptions to learn a model that predicts the relevance of newly presented items. Users can also provide initial information to build a profile without the need for feedback.
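
To make the three steps concrete, below is a minimal sketch in Python using scikit-learn. It is an illustration under stated assumptions, not a reference implementation: TF-IDF stands in for the Content Analyzer, a simple mean of liked-item vectors for the Profile Learner, and cosine similarity for the Filtering Component; the item names and descriptions are invented.

# Minimal content-based recommender sketch (illustrative only).
import numpy as np
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

# Hypothetical item descriptions (unstructured content).
items = {
    "item1": "action movie with car chases and explosions",
    "item2": "romantic comedy set in Paris",
    "item3": "fast paced action thriller with explosions",
    "item4": "documentary about French cooking",
}

# 1. Content Analyzer: shift items from raw text to feature vectors.
vectorizer = TfidfVectorizer()
matrix = vectorizer.fit_transform(list(items.values()))
index = {name: i for i, name in enumerate(items)}

# 2. Profile Learner: generalize the user's positive feedback into a profile.
liked = ["item1"]
profile = np.asarray(matrix[[index[i] for i in liked]].mean(axis=0))

# 3. Filtering Component: rank unseen items by similarity to the profile.
scores = cosine_similarity(profile, matrix).ravel()
ranked = sorted((n for n in items if n not in liked),
                key=lambda n: scores[index[n]], reverse=True)
print(ranked)  # "item3" should rank first: it shares the action/explosions terms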

Generally, feedback can be positive or negative, and two types of techniques can be used to gather it: implicit and explicit.

Explicit feedback can be obtained by gathering likes/dislikes, ratings, and comments, while implicit feedback is derived from monitoring and analyzing the user's activities.
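
As a hedged illustration of the implicit side, an application might map monitored activity to a numeric relevance score. The signals and weights below are invented for the example; real systems tune them empirically.

# Illustrative only: derive an implicit feedback score from monitored activity.
def implicit_score(seconds_viewed: float, clicked: bool, purchased: bool) -> float:
    """Combine activity signals into a 0..1 relevance score (assumed weights)."""
    score = min(seconds_viewed / 60.0, 1.0) * 0.4  # dwell time, capped at 1 minute
    score += 0.2 if clicked else 0.0
    score += 0.4 if purchased else 0.0
    return score

print(implicit_score(seconds_viewed=45, clicked=True, purchased=False))  # 0.5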

The “Profile Learner” generates a predictive model using supervised learning algorithms, which is then stored for later use by the “Filtering Component”. Users' tastes are likely to change over time, so it is important to keep this information up to date and feed it back into the “Profile Learner”.

Among the advantages of content-based recommendation systems are:

  • User independence: recommendations are based solely on the active user's own ratings;
  • Transparency: how the system arrives at a particular recommendation can be explained in terms of content features and descriptions; and
  • New items: items that have not yet been rated by any user can still be recommended.

Content-based recommendation systems also have disadvantages:

  • Limited Content: There is a natural limit to the number and type of features that can be associated with the objects they recommend; the information collected might therefore not be sufficient to define a particular user's interests.
  • Over-Specialization: Content-based recommendation systems have no inherent way to recommend something unexpected. The system ranks items by how well they match the user's profile, based solely on similarity to items the user has already rated positively. This drawback is also known as the “serendipity” problem, reflecting the system's limited degree of novelty. A common mitigation is sketched below.
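
One common mitigation, generic rather than tied to any particular system, is to inject a small amount of exploration: with some probability, recommend an item from outside the top profile matches. A minimal sketch, with the epsilon and top-k values chosen arbitrarily:

# Epsilon-greedy injection of novelty to soften the "serendipity" problem.
import random

def recommend(ranked, epsilon=0.1, top_k=10):
    """With probability epsilon, pick a random item from outside the
    top-k profile matches; otherwise return the best match."""
    if random.random() < epsilon and len(ranked) > top_k:
        return random.choice(ranked[top_k:])
    return ranked[0]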


Solving Elusive Problems – Oracle Connectivity Timeout

Hopefully this will help others who come across an issue like this, and serve as a guide on how to approach hard-to-solve problems.

The error causes the client application to time out. There is no apparent pattern or specific time of day when it is most likely to occur.

The error:

Fatal NI connect error 12170.
VERSION INFORMATION:
TNS for 32-bit Windows: Version 10.2.0.1.0 – Production
Windows NT TCP/IP NT Protocol Adapter for 32-bit Windows: Version 10.2.0.1.0 – Production
Time: 09-JUL-2012 22:12:23
Tracing not turned on.
Tns error struct:
ns main err code: 12535
TNS-12535: TNS:operation timed out
ns secondary err code: 12560
nt main err code: 505
TNS-00505: Operation timed out
nt secondary err code: 60
nt OS err code: 0
Client address: <unknown>

A Google search points to a wide variety of issues, but none of these discussions identifies a specific root cause for the error.

Links below:

http://www.experts-exchange.com/Database/Oracle/Q_23523923.html

http://www.unix.com/red-hat/187125-tns-timeout-error-when-connecting-sqlplus-through-scripts-only.html

http://blockdump.blogspot.com/2012/07/connection-problems-inbound-connection.html

https://johnpjeffries.wordpress.com/tag/oracle-streams/

http://pavandba.com/category/networking-with-oracle/

http://oracle.veryoo.com/2012/03/tns-12535-tnsoperation-timed-out.html

These problems take an inordinate amount of resources and money to solve because they involve multiple disciplines. It generally starts with the Application Team working on the client error, but soon hits a dead end. Months pass with no clear solution in sight, as the sporadic nature of the errors makes troubleshooting very time consuming.

In this particular case, a case opened with Oracle resulted in finger pointing.

More resources are assigned to the issue from the network and security teams, each an expert in its own domain. A problem that spans multiple domains requires these teams to build bridges to identify and define the issue in pursuit of a solution.

Troubleshooting Methodology:

Investigation

  • Problem Statement: Create a clear, concise statement of the problem.
  • Problem Description: Identify the symptoms. What works? What doesn’t?
  • Identify Differences and Changes: What has changed recently? What is unique about this system?

Analysis

  • Brainstorm: Gather Hypotheses: What might have caused the problem?
  • Identify Likely Causes: Which hypotheses are most likely?
  • Test Possible Causes: Schedule the testing for the most likely hypotheses. Perform any non-disruptive testing immediately.

Implementation

  • Implement the Fix: Complete the repair.
  • Verify the Fix: Is the problem really fixed?
  • Document the Resolution: What did we do? Get a sign-off from the business owner.

The process:

A complete understanding, from A to Z, of the technology at play is fundamental to tackling such a problem, which is why tight team integration and coordination are paramount.

Understanding the Oracle RAC environment is the first step and this video does a pretty good job at laying the foundation.

https://www.youtube.com/watch?v=dS9uUXXTTko


We need to reduce all variables, leaving the client and a single host to communicate, so we can compare a normal communication with an abnormal communication.

We need to remove the RAC elements, either by shutting down all nodes but one or by removing entries from the tnsnames.ora file, so that we connect to a single node and not the whole RAC.

Additionally, we should use IP addresses in the file or, if names are used, make sure they are defined in the hosts file, so we can rule out any DNS issues.
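
A single-node tnsnames.ora entry for this kind of test might look like the following. The alias and service name are placeholders for illustration; the IP and port match the environment in the traces below.

# Hypothetical tnsnames.ora entry pointing at one RAC node by IP,
# bypassing DNS and the remaining nodes.
ORCL_NODE1 =
  (DESCRIPTION =
    (ADDRESS = (PROTOCOL = TCP)(HOST = 192.168.0.10)(PORT = 4568))
    (CONNECT_DATA =
      (SERVICE_NAME = orcl)
    )
  )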

At this point we can connect an admin's friendly tool, Wireshark, and mirror traffic from the client to the sniffer.

A normal communication:

1001 11:40:10 192.168.0.101 192.168.0.10 TCP 4655 > 4568 [PSH, ACK] Seq=19592 Ack=19023 Win=65535 Len=52
1002 11:40:10 192.168.0.10 192.168.0.101 TCP 4568 > 4655 [PSH, ACK] Seq=19023 Ack=19644 Win=62780 Len=22
1003 11:40:10 192.168.0.101 192.168.0.10 TCP 4655 > 4568 [PSH, ACK] Seq=19644 Ack=19045 Win=65513 Len=156
1004 11:40:10 192.168.0.10 192.168.0.101 TCP 4568 > 4655 [PSH, ACK] Seq=19045 Ack=19800 Win=62780 Len=22
1005 11:40:10 192.168.0.101 192.168.0.10 TCP 4655 > 4568 [PSH, ACK] Seq=19800 Ack=19067 Win=65491 Len=13
1006 11:40:10 192.168.0.10 192.168.0.101 TCP 4568 > 4655 [PSH, ACK] Seq=19067 Ack=19813 Win=62780 Len=17
1007 11:40:10 192.168.0.101 192.168.0.10 TCP 4655 > 4568 [PSH, ACK] Seq=19813 Ack=19084 Win=65474 Len=10
1008 11:40:10 192.168.0.101 192.168.0.10 TCP 4655 > 4568 [FIN, ACK] Seq=19823 Ack=19084 Win=65474 Len=0
1009 11:40:10 192.168.0.10 192.168.0.101 TCP 4568 > 4655 [FIN, ACK] Seq=19084 Ack=19824 Win=62780 Len=0
1010 11:40:10 192.168.0.101 192.168.0.10 TCP 4655 > 4568 [ACK] Seq=19824 Ack=19085 Win=65474 Len=0

Above we see the host 192.168.0.101 communicating from an arbitrary port with the server on port 4568, which is the port the listener for the database SID is configured on. This snippet shows the end of a communication: the host sends data with the TCP PSH flag set, the server ACKs the data once received, and answers in turn.

Finally we see the client (192.168.0.101) send a TCP FIN flag, signaling no more data and asking the server to acknowledge; the server replies with its own FIN, which ends with an ACK from the client.

An abnormal communication:

1011 9:45:09 192.168.0.101 192.168.0.10 TCP 4663 > 4568 [SYN] Seq=0 Win=65535 Len=0 MSS=1460
1012 9:45:09 192.168.0.10 192.168.0.101 ICMP Destination unreachable (Port unreachable)
1013 9:45:11 192.168.0.101 192.168.0.10 TCP 4663 > 4568 [SYN] Seq=0 Win=65535 Len=0 MSS=1460
1014 9:45:11 192.168.0.10 192.168.0.101 ICMP Destination unreachable (Port unreachable)
1015 9:45:18 192.168.0.101 192.168.0.10 TCP 4663 > 4568 [SYN] Seq=0 Win=65535 Len=0 MSS=1460
1016 9:45:18 192.168.0.10 192.168.0.101 ICMP Destination unreachable (Port unreachable)
1017 9:45:31 192.168.0.101 192.168.0.10 TCP 4664 > 4568 [SYN] Seq=0 Win=65535 Len=0 MSS=1460
1018 9:45:31 192.168.0.10 192.168.0.101 ICMP Destination unreachable (Port unreachable)
1019 9:45:34 192.168.0.101 192.168.0.10 TCP 4664 > 4568 [SYN] Seq=0 Win=65535 Len=0 MSS=1460
1020 9:45:34 192.168.0.10 192.168.0.101 ICMP Destination unreachable (Port unreachable)
1021 9:45:40 192.168.0.101 192.168.0.10 TCP 4664 > 4568 [SYN] Seq=0 Win=65535 Len=0 MSS=1460
1022 9:45:40 192.168.0.10 192.168.0.101 ICMP Destination unreachable (Port unreachable)

Above we see what a failed communication, which caused the timeout error in the application, looks like.

We see the client use an arbitrary source port to send a TCP packet with the SYN flag set, trying to synchronize sequence numbers to begin communications, and the server reply with an ICMP destination unreachable (port unreachable).

We see the client try three times, then change the source TCP port by incrementing it by one and try unsuccessfully three more times, before the application gives up and times out.

Initial Conclusion:

Contrary to Oracle's assertion that this was a network issue, we can conclude that it is not.

The frame was successfully routed across the network: the router ARPed for the host, got the response, and sent the frame. Furthermore, the intended destination host was online and accepted the frame into its communication buffer, where it was processed by TCP. TCP tried to deliver the data to the destination port (4568), but the port process either did not exist or did not reply expeditiously, so the protocol handler reported Destination Unreachable – Port Unreachable.

The solution:

So: either kick it back to Oracle, or find the solution ourselves.

A list of possibilities emerged from troubleshooting and online forums, but all of them patch the issue by increasing timeout parameters at either the application layer or the OS layer, without really addressing the root cause:

  1. Change the database SID
  2. Disable iptables
  3. Set SQLNET.INBOUND_CONNECT_TIMEOUT=0 in the listener.ora and sqlnet.ora files
  4. Make kernel-level changes to the OS to increase TCP timeout parameters

Taking a closer look and comparing the two packet captures, we see that the only difference between them is the source port. The source port is not something you would generally look at when putting security in place, because you lock down your host by whatever port it happens to be listening on and restrict who has access to that port.

It turns out that an automatically generated “iptables” rule set blocked a range of ports (4660-4678) used for peer-to-peer (P2P) traffic.

Every time the client picked an arbitrary source TCP port that happened to fall within the 4660-4678 range, the connection was rejected by “iptables” with an icmp-port-unreachable.
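
The offending rule would have looked something like the first line below; this is a reconstruction for illustration, since the actual rule was auto-generated. Rejecting TCP segments whose source port falls in the P2P range produces exactly the ICMP port-unreachable replies seen in the capture. Listing the rules with line numbers then lets you find and delete the offender.

# Reconstructed example of the kind of auto-generated rule at fault:
iptables -A INPUT -p tcp --sport 4660:4678 -j REJECT --reject-with icmp-port-unreachable

# Diagnose: list rules with line numbers, then delete the offending one.
iptables -L INPUT -n --line-numbers
iptables -D INPUT <rule-number>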


Enterprise Backup Network with ANIRA

One of the most critical components of the IT infrastructure, if not the most critical, is the network, although it is often taken for granted. In today's client-server environment, and even more so with the cloud computing model, offices without network connectivity become unable to carry out their daily business.

If your business, or part of it, is disconnected from the rest, the impact will be significant, not to mention the angry customers.

This post goes into what it takes to implement a cost-effective backup network, should the primary network link fail.

The scenario described includes multiple remote offices or field locations connected via bonded T1 circuits to an MPLS network. All major services are provided to these remote offices from a central location, as is almost always the case, making an outage fatal to the remote office.

Despite redundant T1 circuits providing an aggregate of 3 Mbps to the remote office, CRC or physical errors on one of the circuits will bring the whole bonded circuit down, so relying on the second circuit as a backup is a flawed approach.

The router performs only WAN functionality, leaving all other routing, VLAN-based network segmentation, and security within the office to a layer-3-capable switch.

The routing protocol of choice is BGP as it is natively used by the MPLS network.

The backup link we are looking for needs to be cost effective, meaning it should not add to the bottom line significantly until it is needed. It also requires sufficient bandwidth for data and voice applications to run at an acceptable level from the remote office.

AT&T provides a product that fits this description called ANIRA (AT&T Netgate). There is a minimal monthly rate, a cap of 1 Mbps aggregate bandwidth, and an additional charge for usage.

This could be done with off-the-shelf equipment in lieu of the ANIRA product, but that approach introduces additional challenges, such as creating the VPN tunnels to equipment at the main office and correctly propagating routes when the main circuit at the remote office goes down. The AT&T service provides management of the backup devices as well as connectivity through a VPN tunnel into the MPLS cloud.

The image above illustrates the network topology.

Should the remote office lose network connectivity, traffic will start to flow through the Netgate, which will trigger the device to connect and initiate a VPN tunnel, advertising all routes belonging to that office into the MPLS network.

Failover between the two paths is handled by VRRP, the Virtual Router Redundancy Protocol. It allows the default gateway used by the switch to float between the main router and the backup device.

The Cisco configuration is outlined below:

! Track the bonded T1 (multilink) interface at the IP layer
! (interface name assumed to be Multilink1)
track 1 interface Multilink1 ip routing

interface FastEthernet0/0
description Internal Network
ip address 192.168.0.2 255.255.255.0
duplex auto
speed auto
! VRRP group 1: virtual gateway 192.168.0.1 shared with the Netgate
vrrp 1 description LAN
vrrp 1 ip 192.168.0.1
vrrp 1 preempt delay minimum 60
vrrp 1 priority 110
vrrp 1 track 1 decrement 100
arp timeout 60

The Netgate device has an IP address of 192.168.0.3 and a VRRP IP address of 192.168.0.1.

A brief description of the relevant configuration follows:

The VRRP IP address 192.168.0.1 floats between the routers (main router/Netgate) depending on which one has the highest priority. The Netgate has a default priority, or weight, of 50, plus an additional 25 when the VPN is connected. In the normal state we want the main router to handle traffic, so we force its priority above 75, the maximum for the Netgate.

vrrp 1 priority 110

To decide whether the default route should move to the Netgate, we need to know if the T1s are down. In this example a single T1 being down should not be a deciding factor, because the other T1 can handle the traffic, so we chose to monitor the bonded interface at the IP layer.

track 1 interface Multilink1 ip routing

In the event of an outage the main router needs to lower its priority, or weight, below the priority of the Netgate, so that the Netgate becomes the new default router with IP address 192.168.0.1.

vrrp 1 track 1 decrement 100

This event will bring the main router’s priority to 10, well below the minimum for the Netgate.
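
Putting the numbers together, the failover logic works out as follows (using the Netgate values described above):

State                           Main router   Netgate   VRRP master
Normal (multilink up)           110           50        Main router
Multilink down, VPN not yet up  10            50        Netgate
Multilink down, VPN connected   10            75        Netgate
Multilink restored (delayed)    110           50-75     Main router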

When the main circuit comes back online we want to switch back to it and bring down the VPN tunnel. We accomplish this using the following command: vrrp 1 preempt

However, when a T1 comes back up it is usually not a clean process, and the telco might also be performing intrusive testing, so it is important that we allow some time before we switch traffic back to the main circuit.

vrrp 1 preempt delay minimum 60

This configuration should be able to provide an automatic, redundant backup network link for remote offices at an affordable price.

Mobile is going to crush Facebook – NOT

Mark Cuban's recent article on the Facebook IPO post mortem makes the point that mobile will crush Facebook. He has several valid points, but in my view he misses that mobile is a challenge for more than just Facebook.

Facebook is valuable because it makes targeted advertising to consumers possible to a greater extent than has been possible with Google.

Google has been mining data for years, and not only what we search. They mine what we share with others over email (Gmail users), and then correlate that information with trends based on our geographic location. Does Google know my age? Probably, since I have used other services where I have given up this information; if not, it could fairly easily put me in a range. Does Google know if I'm single or married or have kids? They can probably guess by scraping content from my emails. Does Google know what I buy? Yes, if I have used Google to pay for it, or again by scraping any invoice or receipt that arrives in my inbox. Does Google know what I like to watch? Sure, they own YouTube.

I agree that Facebook has a problem with mobile, but the value proposition it presents to brands is greater than what Google has to offer. Furthermore, mobile is not a problem for just Facebook, but for all players whose revenue model relies on advertising income.

Advertising has always relied on passive users, so the strategy was to capture the largest number of eyeballs using delivery channels such as TV, radio, or print media. Google and Facebook make it possible to provide relevant advertising in a way not possible before.

Mobile will change how brands interact with consumers. From the consumer perspective it offers just-in-time access to on-demand content, where people reach for information when they want it, from wherever they want. This presents a challenge for the traditional marketing strategy that has perfected the “couch potato” pitch. People using mobile devices will not be passively sitting in front of the regular channel, so if a brand expects to get in front of consumers it will have to find a different way.

So what's different about the mobile platform? Smaller screens and less bandwidth, or bandwidth caps. The tablet will remove these limitations, but it is a totally different device whose use and demographics are also totally different.

Trying to force mobile to become an additional marketing channel used the same way as TV or the web is a mistake, because the way people interact with mobile is different from the way they interact with TV, radio, print, and the personal computer. Recognizing this, working around the handicap of a smaller screen and bandwidth limitations, and utilizing the advantages mobile brings, such as relevant, targeted advertising and location awareness, will necessarily spawn new ways for brands to connect with consumers.

E-commerce and The End of Search

Most of us consider the Internet a bucket of miscellaneous tidbits, and the modern search engine our personal assistant. But is that analogy correct? You open your browser, bring up the Google homepage, enter whatever term you happen to be looking for, and bingo: you get a list of results you then have to “search” through to find what you are looking for. So in fact you are searching through the results of what Google searched for.

Google co-founder Larry Page once described the “perfect search engine” as something that “understands exactly what you mean and gives you back exactly what you want”, which is far from what Google is today.


A recent study titled “Google Effects on Memory: Cognitive Consequences of Having Information at Our Fingertips”, by researchers at Columbia, Harvard, and Wisconsin-Madison, studied whether the Internet has become our primary transactive memory source, basically an external memory system. These are the conclusions reached by the four controlled experiments in the study:

1) People share information easily because they rapidly think of computers when they find they need knowledge (Expt. 1).

2) The social form of information storage is also reflected in the findings that people forget items they think will be available externally, and remember items they think will not be available (Expts. 2 and 3).

3) Transactive memory is also evident when people seem better able to remember which computer folder an item has been stored in than the identity of the item itself (Expt. 4).

What is relevant here is the effect on whether or not we choose to commit certain information to memory when we know it is readily available on the computer. We store specific things in specific places, like food in the fridge, but who remembers exactly what is in the fridge?

It is completely natural for people to minimize what needs to be encoded into memory by organizing and then encoding the location of the information, rather than the information itself. This is where the traditional search engine falls short of meeting the basic cognitive needs of humans.

The emergence of the mobile device has been remarkable, and Apple's vision in this space has changed the way people access information. There is data to support the notion that people are not mirroring desktop behavior on mobile devices.

People are not searching on smartphones as much as they do on desktops. Steve Jobs attributed this to the availability of mobile apps and the desktop lacking an app store. In reality, the availability of apps, or the lack thereof, is not really the central point. What's important is that information is being categorized, compartmentalized, and organized for consumption, and delivered more efficiently through mobile devices. This is clearly a step in the right direction in delivering more relevant and timely information to the user.

Artificial intelligence will play a major role in the next wave of innovation, starting with evolving and adaptive fuzzy systems as classification algorithms, and then matching the wants with the needs of the user. A recent example is Alfred, an application that gives personalized restaurant recommendations: it is all recommendations and no direct search.

GiftWoo takes the next step forward in the e-commerce space in a vertical market. Until now, going online to find a gift for your better half has involved a search that returns thousands of choices. E-commerce websites are currently designed to deliver a high number of choices rather than the “right choice” for the consumer. GiftWoo will give the buyer the unique, perfect gift they seek without the searching, by first building a profile of the gift recipient and then using a proprietary algorithm to match the ideal gift to the profile.

How Our Decisions are Shaped

Dan Ariely, a professor of behavioral economics at Duke University, presents examples of cognitive illusions that help illustrate why humans make predictably irrational decisions.

EG is a celebration of the American entertainment industry. Since 1984, Richard Saul Wurman has created extraordinary gatherings about learning and understanding. EG is a rich extension of these ideas: a conference that explores the attitude of understanding in music, film, television, radio, technology, advertising, gaming, interactivity, and the web. – The Entertainment Gathering

Dan Ariely is the Alfred P. Sloan Professor of Behavioral Economics at the MIT Sloan School of Management. He also holds an appointment at the MIT Media Lab, where he is the head of the eRationality research group, and is considered one of the leading behavioral economists. Currently, Ariely is serving as a visiting professor at Duke University's Fuqua School of Business, where he teaches a course based upon the findings in Predictably Irrational.

Ariely was an undergraduate at Tel Aviv University and received a Ph.D. and M.A. in cognitive psychology from the University of North Carolina at Chapel Hill, and a Ph.D. in business from Duke University. His research focuses on discovering and measuring how people make decisions. He models the human decision making process and in particular the irrational decisions that we all make every day.

Ariely is the author of the book, Predictably Irrational: The Hidden Forces That Shape Our Decisions, which was published on February 21, 2008 by HarperCollins.


Railway Bridge Health Monitoring System

In my last post I put forth the idea of using the unique capabilities and UX of the iPhone to help track defects in railways, an idea that came about after my initial conversations with a friend from the railroad industry.

Hours into our conversation, I was perplexed at the lack of proactive monitoring of today's bridges used by trains for transport.

Should a uniquely located bridge collapse, an energy crisis could ensue as a result of the coal fields in the northeast/midwest being severed from the southwest.

I looked at several existing methods and solutions used today to address this issue and drew from each to arrive at a refined approach to monitoring the health of railway bridges.

There were basically 3 design considerations which needed to be met:

  1. Easy to deploy
  2. Low Maintenance
  3. Long Term

The system had to be easily deployable, such that an electrician in the field could install the components of the solution. Low maintenance is obviously also key, reducing the total cost of ownership, and long-term operation reduces the need for personnel to visit these bridges.

Application Requirements:

In order to monitor the health of a structure, vibrations of the structure need to be gathered and analyzed to develop a baseline under normal conditions. Subsequent measurements of vibrations can then be compared to the baseline to determine if an anomaly exists.

To meet this requirement, sensors (3-axis accelerometers) are placed along the span of the bridge to collect data. The frequency components of interest range between 0.25 and 20 Hz, the measurements need to cover roughly 40 seconds before and after the passage of the train, and time synchronization between the sensors is also a factor to take into account.
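
A quick back-of-the-envelope check of what those requirements imply for a single sensor node; the 16-bit sample width is an assumption for illustration:

# Rough sizing for one 3-axis accelerometer node (sample width assumed).
F_MAX = 20.0             # highest frequency of interest, Hz
SAMPLE_RATE = 2 * F_MAX  # Nyquist minimum; real designs oversample further
WINDOW_S = 80            # 40 s before + 40 s after the train passes
AXES = 3
BYTES_PER_SAMPLE = 2     # assumed 16-bit samples
samples = SAMPLE_RATE * WINDOW_S * AXES
print(f"{samples:.0f} samples, {samples * BYTES_PER_SAMPLE / 1024:.0f} KiB per pass")
# -> 9600 samples, about 19 KiB per train passage: small enough to buffer
#    locally and hand off for collection.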

Existing approaches use technologies such as solar panels to supply power in remote areas, GSM for data transmission, GPS for time synchronization, and a star topology in which the sensors communicate with a head node that collects and transmits the data for analysis.

There are multiple problems here: solar panels are expensive and prone to theft, vandalism, and damage; GSM data transmission is not always viable where there is no network coverage in remote areas; and relying on a head node to collect and transmit the data is like putting all your eggs in one basket. If the head node failed, the system would stop working.

The techniques I came across fell basically into two categories: existing bridges and new bridges.

I focused on existing bridges, since very sophisticated things are being done with new bridges. Today engineers embed sensors and fiber in the concrete while a bridge is being built in order to take measurements, but this approach is obviously not viable for existing bridges.

The methods in use for existing bridges included visual inspection; wired solutions, which were bulky, expensive, and time consuming to set up; and a few wireless solutions, some of which were proprietary and not scalable, along with some interesting work from India.

In summary, there are several challenges in deploying such a solution at sometimes remote and hostile locations: a lack of power, which calls for alternative sources of energy; a way to effectively and reliably collect and transmit the data for analysis; and keeping installation and maintenance costs low.

Since the train comes and goes, so can the data collected by the sensors. The train would activate the standby sensors as it approaches the bridge and then collect the data buffered by the sensors after passing over it. This approach deals with the data-transmission limitations while eliminating the need for power for this component of the system. The train would carry the data and upload it to a collection station.

https://www.youtube.com/watch?v=PVH1K1Eocz0

To deal with the reliability and power requirements, the linear-path star topology would be dropped in favor of a mesh network, which provides true self-organizing and self-healing properties. On top of the mesh network, TSMP (Time Synchronized Mesh Protocol) would be used, providing more than 99.9% reliability and the lowest power consumption per delivered packet.

The key to achieving maximum reliability is channel hopping, in which each packet is sent on a different channel. This way, transient failures in links on a given channel are handled gracefully, and persistent link failures that develop after the site survey do not destabilize the network.
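
The scheduling idea behind this kind of channel hopping can be sketched in a few lines of Python; the channel list and offset are arbitrary illustrations, not TSMP's actual parameters:

# Sketch of slotted channel hopping: each (timeslot, link) pair maps to a
# different radio channel, so a retry never repeats the failed channel.
CHANNELS = list(range(11, 26))  # e.g. IEEE 802.15.4 channels 11-25

def hop_channel(timeslot: int, link_offset: int) -> int:
    """Return the channel to use for a given timeslot and link."""
    return CHANNELS[(timeslot + link_offset) % len(CHANNELS)]

print([hop_channel(t, link_offset=3) for t in range(5)])  # [14, 15, 16, 17, 18]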

Sensors of this type, using this approach, can last 7-10 years on a small battery, meeting the application requirements.

Now to raise some money, build a working prototype and demo it to the Railway companies.
