Personalization and Content-based Recommender Systems

Personalization is a big trend today. There is so much information available that we need to find new ways to filter, categorize and display data that is relevant.

Recommender systems guide users in a personalized way to interesting objects in a large set of possible options.

Content-based systems try to recommend items similar to those a given user has liked in the past.

The basic process performed by a content-based recommender system consists of matching the attributes of a user profile, in which preferences and interests are stored, against the attributes of an object. These attributes have been previously collected and are subjected to analysis and modelling with the intent of arriving at a relevant result.

The recommendation process is performed in 3 steps:

  1. The Content Analyzer: When information has no structure, it is the Content Analyzer’s role to provide the structure necessary for the next processing steps. Data items are analyzed by feature extraction techniques to shift item representation from the original information to the target one. This representation is the input for the next 2 steps.
  2. The Profile Learner: This module collects data from the Content Analyzer and tries to generalize it, building a user profile. The generalization strategy is usually performed using machine learning techniques, which are able to infer a model of user interests.
  3. The Filtering Component: This module uses the user profile to suggest relevant items by matching the profile representation to the items being recommended.

The process begins with the “Content Analyzer” extracting features (keywords, concepts, etc.) to construct an item representation. A profile is created and updated for the active user, and reactions to the items are collected in some way and stored in a repository. These reactions, called feedback or annotations, are exploited in combination with the related item descriptions to learn a model that predicts the relevance of a newly presented item. Users can also provide initial information to build a profile without the need for feedback.

Generally, feedback can be positive or negative, and two types of techniques can be used to gather it: implicit and explicit.

Explicit feedback can be obtained by gathering likes/dislikes, ratings and comments, while implicit feedback is derived from monitoring and analyzing the user’s activities.

The “Profile Learner” generates a predictive model using supervised learning algorithms, which is then stored to be used later by the “Filtering Component”. Users’ tastes are likely to change over time, so it’s important to keep this information up to date and feed it back into the “Profile Learner”.
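
To make the pipeline concrete, below is a minimal Python sketch of the three components working together. It assumes items have already been reduced to keyword counts by the Content Analyzer and uses a simple cosine similarity as the Filtering Component; the item texts, function names and scoring are illustrative only, and a real system would use richer features and learning algorithms.

import math
from collections import Counter

def content_analyzer(text):
    # Toy feature extraction: lowercase keyword counts
    return Counter(text.lower().split())

def profile_learner(liked_items):
    # Aggregate the features of positively rated items into a user profile
    profile = Counter()
    for item in liked_items:
        profile.update(content_analyzer(item))
    return profile

def cosine(a, b):
    dot = sum(a[k] * b[k] for k in a)
    norm = math.sqrt(sum(v * v for v in a.values())) * math.sqrt(sum(v * v for v in b.values()))
    return dot / norm if norm else 0.0

def filtering_component(profile, candidates):
    # Rank unseen items by similarity to the learned profile
    return sorted(((cosine(profile, content_analyzer(c)), c) for c in candidates), reverse=True)

liked = ["thriller movie about spies", "spy thriller set in europe"]
candidates = ["romantic comedy", "cold war spy thriller", "cooking show"]
for score, item in filtering_component(profile_learner(liked), candidates):
    print(f"{score:.2f}  {item}")

Positive explicit feedback (likes, high ratings) or implicit feedback (clicks, time spent reading) simply determines which items end up in the liked list.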

Amongst the advantages of content-based recommendation systems are:

  • User independence, since recommendations are based solely on the active user’s own ratings;
  • Transparency, since how the system arrives at a particular recommendation can be explained in terms of the content features and descriptions it used; and
  • New items can be recommended even before they have been rated by any user.

Content-based recommendation systems also have disadvantages:

  • Limited Content: There is a natural limit in the number and type of features that can be associated with the objects they recommend, therefore the information collected might not be sufficient to define a particular user’s interests.
  • Over-Specialization: Content-based recommendation systems have no way to recommend something unexpected. The system is limited to ranking items by score against the user’s profile, based solely on similarities to items the user has already given positive feedback on. This drawback is also known as the “serendipity” problem, reflecting the system’s limited degree of novelty.

Solving Elusive Problems – Oracle Connectivity Timeout

Hopefully this post will help others who come across an issue like this and serve as a guide on how to approach hard-to-solve problems.

The error causes the client application to time out. There is no apparent pattern or specific time of day when it is most likely to occur.

The error:

Fatal NI connect error 12170.
VERSION INFORMATION:
TNS for 32-bit Windows: Version 10.2.0.1.0 – Production
Windows NT TCP/IP NT Protocol Adapter for 32-bit Windows: Version 10.2.0.1.0 – Production
Time: 09-JUL-2012 22:12:23
Tracing not turned on.
Tns error struct:
ns main err code: 12535
TNS-12535: TNS:operation timed out
ns secondary err code: 12560
nt main err code: 505
TNS-00505: Operation timed out
nt secondary err code: 60
nt OS err code: 0
Client address: <unknown>

A Google search points to a wide variety of issues, but no specific solution (root cause) for the error was found in any of these discussions.

Links below:

http://www.experts-exchange.com/Database/Oracle/Q_23523923.html

http://www.unix.com/red-hat/187125-tns-timeout-error-when-connecting-sqlplus-through-scripts-only.html

http://blockdump.blogspot.com/2012/07/connection-problems-inbound-connection.html

https://johnpjeffries.wordpress.com/tag/oracle-streams/

http://pavandba.com/category/networking-with-oracle/

http://oracle.veryoo.com/2012/03/tns-12535-tnsoperation-timed-out.html

These problems take an inordinate amount of resources and money to solve because they involve multiple disciplines. It generally starts with the Application Team working on the client error, but they soon end up at a dead end. Months pass with no clear solution in sight, as the sporadic nature of the errors makes troubleshooting very time consuming.

In this particular case, a support case opened with Oracle resulted in finger pointing.

More resources are assigned to the issue from the network and security teams, but each is an expert in their own domain. A problem that spans multiple domains requires these teams to build bridges to identify and define the issue in pursuit of a solution.

Troubleshooting Methodology:

Investigation

  • Problem Statement: Create a clear, concise statement of the problem.
  • Problem Description: Identify the symptoms. What works? What doesn’t?
  • Identify Differences and Changes: What has changed recently? What is unique about this system?

Analysis

  • Brainstorm: Gather Hypotheses: What might have caused the problem?
  • Identify Likely Causes: Which hypotheses are most likely?
  • Test Possible Causes: Schedule the testing for the most likely hypotheses. Perform any non-disruptive testing immediately.

Implementation

  • Implement the Fix: Complete the repair.
  • Verify the Fix: Is the problem really fixed?
  • Document the Resolution: What did we do? Get a sign-off from the business owner.

The process:

A complete understanding from A to Z of the technology at play is fundamental to tackle such a problem, which is why tight team integration and coordination is paramount.

Understanding the Oracle RAC environment is the first step, and this video does a pretty good job of laying the foundation.

http://www.youtube.com/watch?v=dS9uUXXTTko

We need to reduce the variables, leaving the client and a single host to communicate, so we can compare a normal communication with an abnormal one.

We need to remove the RAC elements, either by shutting down all nodes but one or by removing entries from the tnsnames.ora file, so that we connect to a single node and not the whole RAC.

Additionally, we should use IP addresses in the file or, if names are used, make sure they are defined in the hosts file, so we can rule out any DNS issues.
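
Before capturing anything, it can also help to sanity-check raw TCP reachability to that single node, independent of the Oracle client stack. The Python sketch below is illustrative only; the address and listener port are the ones that appear in the captures further down and should be replaced with your own.

import socket

HOST = "192.168.0.10"   # the single RAC node left running
PORT = 4568             # the listener port seen in the captures below

for attempt in range(5):
    try:
        with socket.create_connection((HOST, PORT), timeout=5):
            print(f"attempt {attempt}: TCP connect OK")
    except OSError as exc:
        # Refused, rejected or silently dropped connections surface here
        print(f"attempt {attempt}: failed -> {exc}")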

At this point we can bring in an admin’s friendly tool, Wireshark, and mirror traffic from the client to the sniffer.

A normal communication:

1001 11:40:10 192.168.0.101 192.168.0.10 TCP 4655 > 4568 [PSH, ACK] Seq=19592 Ack=19023 Win=65535 Len=52
1002 11:40:10 192.168.0.10 192.168.0.101 TCP 4568 > 4655 [PSH, ACK] Seq=19023 Ack=19644 Win=62780 Len=22
1003 11:40:10 192.168.0.101 192.168.0.10 TCP 4655 > 4568 [PSH, ACK] Seq=19644 Ack=19045 Win=65513 Len=156
1004 11:40:10 192.168.0.10 192.168.0.101 TCP 4568 > 4655 [PSH, ACK] Seq=19045 Ack=19800 Win=62780 Len=22
1005 11:40:10 192.168.0.101 192.168.0.10 TCP 4655 > 4568 [PSH, ACK] Seq=19800 Ack=19067 Win=65491 Len=13
1006 11:40:10 192.168.0.10 192.168.0.101 TCP 4568 > 4655 [PSH, ACK] Seq=19067 Ack=19813 Win=62780 Len=17
1007 11:40:10 192.168.0.101 192.168.0.10 TCP 4655 > 4568 [PSH, ACK] Seq=19813 Ack=19084 Win=65474 Len=10
1008 11:40:10 192.168.0.101 192.168.0.10 TCP 4655 > 4568 [FIN, ACK] Seq=19823 Ack=19084 Win=65474 Len=0
1009 11:40:10 192.168.0.10 192.168.0.101 TCP 4568 > 4655 [FIN, ACK] Seq=19084 Ack=19824 Win=62780 Len=0
1010 11:40:10 192.168.0.101 192.168.0.10 TCP 4655 > 4568 [ACK] Seq=19824 Ack=19085 Win=65474 Len=0

We can see above the host 192.168.0.101 communicating from an arbitrary port with the server on port 4568, which is the port the database listener is configured on. This snippet shows the end of a communication: the host sends data with the TCP PSH flag, and the server ACKs each chunk once it is received and answers.

Finally we see the client (192.168.0.101) send a TCP FIN flag, signaling no more data and asking the server to acknowledge, which the server does with its own FIN/ACK, ending with a final ACK from the client.

An abnormal communication:

1011 9:45:09 192.168.0.101 192.168.0.10 TCP 4663 > 4568 [SYN] Seq=0 Win=65535 Len=0 MSS=1460
1012 9:45:09 192.168.0.10 192.168.0.101 ICMP Destination unreachable (Port unreachable)
1013 9:45:11 192.168.0.101 192.168.0.10 TCP 4663 > 4568 [SYN] Seq=0 Win=65535 Len=0 MSS=1460
1014 9:45:11 192.168.0.10 192.168.0.101 ICMP Destination unreachable (Port unreachable)
1015 9:45:18 192.168.0.101 192.168.0.10 TCP 4663 > 4568 [SYN] Seq=0 Win=65535 Len=0 MSS=1460
1016 9:45:18 192.168.0.10 192.168.0.101 ICMP Destination unreachable (Port unreachable)
1017 9:45:31 192.168.0.101 192.168.0.10 TCP 4664 > 4568 [SYN] Seq=0 Win=65535 Len=0 MSS=1460
1018 9:45:31 192.168.0.10 192.168.0.101 ICMP Destination unreachable (Port unreachable)
1019 9:45:34 192.168.0.101 192.168.0.10 TCP 4664 > 4568 [SYN] Seq=0 Win=65535 Len=0 MSS=1460
1020 9:45:34 192.168.0.10 192.168.0.101 ICMP Destination unreachable (Port unreachable)
1021 9:45:40 192.168.0.101 192.168.0.10 TCP 4664 > 4568 [SYN] Seq=0 Win=65535 Len=0 MSS=1460
1022 9:45:40 192.168.0.10 192.168.0.101 ICMP Destination unreachable (Port unreachable)

Above we see what a failed communication, the one that caused the timeout error in the application, looks like.

We see the client use an arbitrary port and send a TCP packet with a SYN flag trying to synchronize sequence numbers to begin communications, and the server replies with an ICMP destination unreachable (port unreachable).

We see the client try three times before changing the source TCP port by adding one to the number and trying unsuccessfully three more times, before the application gives up and times out.
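
If you want to flag this pattern programmatically rather than eyeball it in Wireshark, a rough sketch using the scapy library (one option among many pcap parsers) could read a saved capture and print every SYN to the listener port along with any ICMP port-unreachable replies. The file name and port below are placeholders.

from scapy.all import rdpcap, IP, TCP, ICMP

packets = rdpcap("capture.pcap")   # capture exported from Wireshark

for pkt in packets:
    if pkt.haslayer(IP) and pkt.haslayer(TCP) and pkt[TCP].dport == 4568 and pkt[TCP].flags & 0x02:
        # SYN bit set towards the listener port
        print(f"SYN  {pkt[IP].src}:{pkt[TCP].sport} -> {pkt[IP].dst}:{pkt[TCP].dport}")
    elif pkt.haslayer(IP) and pkt.haslayer(ICMP) and pkt[ICMP].type == 3 and pkt[ICMP].code == 3:
        # ICMP type 3 / code 3 = destination unreachable, port unreachable
        print(f"ICMP port unreachable from {pkt[IP].src}")

The signature to look for is SYNs being answered by ICMP destination-unreachable messages instead of SYN-ACKs.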

Initial Conclusion:

We can conclude that, contrary to Oracle’s assertion, this is not a network issue.

The frame was successfully routed across the network: the router ARP’ed for the host, got the response and sent the frame. Furthermore, the intended destination host was online and willing to accept the frame into its communication buffer. The frame was then processed by TCP, which tried to deliver the data to the destination port (4568), but the process behind that port either didn’t exist or did not reply expeditiously. The protocol handler then reports Destination Unreachable – Port Unreachable.

The solution:

So it’s either kick it back to Oracle or find the solution ourselves.

A list of possibilities emerged from troubleshooting and online forums, but all of them patch the issue by increasing timeout parameters at either the application layer or the OS layer, without really addressing the root cause.

  1. Change the database SID
  2. Disable iptables
  3. Set SQLNET.INBOUND_CONNECT_TIMEOUT=0 in the listener.ora and sqlnet.ora files
  4. Kernel level changes to the OS to increase TCP timeout parameters.

Taking a closer look and comparing the two packet captures, we see that the only difference between them is the source port. The source port is not something you would generally look at when putting security in place, because you would lock down your host by whatever port it happens to be listening on and restrict who has access to that port.

It turns out that an automatically generated “iptables” rule set blocked a range of ports (4660-4678) used for peer-to-peer (P2P) applications.

Every time the client picked an arbitrary source TCP port to communicate with the server and that port happened to fall within the 4660-4678 range, the connection would be rejected by “iptables” with an icmp-port-unreachable.
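
To see why the failures were so sporadic, here is a rough back-of-the-envelope sketch in Python. It assumes the client draws ephemeral source ports uniformly from 1025-5000 (the default dynamic range on older Windows clients; an assumption, since the actual range depends on the OS and its configuration) and checks how often a port lands in the blocked range.

import random

BLOCKED = range(4660, 4679)      # source ports rejected by the iptables P2P rule
EPHEMERAL = range(1025, 5001)    # assumed client ephemeral port range (older Windows default)

# Exact chance that a single connection draws a blocked source port
p = len(BLOCKED) / len(EPHEMERAL)
print(f"chance per connection: {p:.2%}")

# Quick simulation over 10,000 connection attempts
failures = sum(random.choice(EPHEMERAL) in BLOCKED for _ in range(10_000))
print(f"simulated failures: {failures} / 10000")

Roughly one connection in two hundred landing in the blocked range is exactly the kind of intermittent, hard-to-reproduce failure that kept getting blamed on the network.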

Enterprise Backup Network with ANIRA

One of the most critical, if not the most critical, components of the IT infrastructure is the network, although it is often taken for granted. In today’s client-server environment, and even more so with the cloud computing model, offices without connectivity to the network are unable to carry out their daily business.

If your business, or part of it, is disconnected from the rest, the impact will be significant, to say nothing of making your customers angry.

This post goes into what it takes to implement a cost-effective backup network, should the primary network link fail.

The scenario described includes multiple remote offices or field locations connected via bonded T1 circuits to an MPLS network. All major services are provided to these remote offices through a central location which is almost always the case, making an outage fatal to the remote office.

Despite redundant T1 circuits providing an aggregate of 3 Mbps to the remote office, CRC errors or physical errors on one of the circuits will bring the whole bonded circuit down, so relying on the second circuit as a backup is a flawed approach.

The router performs only WAN functionality, leaving all other routing and VLAN based-network segmentation and security within the office to a layer-3 capable switch.

The routing protocol of choice is BGP as it is natively used by the MPLS network.

The backup link we are looking for would need to be cost effective, meaning it should not add to the bottom line significantly until it is needed. It would also require sufficient bandwidth for data and voice applications to run at an acceptable level from the remote office.

AT&T provides a product that fits this description called ANIRA (AT&T Netgate). There is a minimal monthly rate, a cap of 1 Mbps aggregate bandwidth and an additional charge for usage.

This could be done with off-the-shelf equipment in lieu of the ANIRA product, but that approach brings additional challenges, such as creating the VPN tunnels to equipment at the main office and correctly propagating routes when the main circuit at the remote office goes down. The AT&T service provides the management of the backup devices as well as the connectivity through a VPN tunnel into the MPLS cloud.

The image above illustrates the network topology.

Should the remote office lose network connectivity, traffic will start to flow through the Netgate, which will trigger the device to connect and initiate a VPN tunnel, advertising all routes belonging to that office into the MPLS network.

The protocol used to determine which path traffic takes is VRRP, or Virtual Router Redundancy Protocol. It allows the default route used by the switch to float between the main router and the backup device.

Cisco configuration outlined below:

track 1 interface Multilink ip routing

interface FastEthernet0/0
description Internal Network
ip address 192.168.0.2 255.255.255.0
duplex auto
speed auto
vrrp 1 description LAN
vrrp 1 ip 192.168.0.1
vrrp 1 preempt delay minimum 60
vrrp 1 priority 110
vrrp 1 track 1 decrement 100
arp timeout 60

The Netgate device has an IP address of 192.168.0.3 and a VRRP IP address of 192.168.0.1.

A brief description of relevant configuration below:

The VRRP IP address 192.168.0.1 floats between the routers (main router/Netgate) depending on which one has the highest priority. The Netgate has a default priority, or weight, of 50, plus an additional 25 when the VPN is connected. In the normal state we want the main router to handle traffic, so we force its priority to anything higher than 75, which is the maximum for the Netgate.

vrrp 1 priority 110

To decide whether the default route should move to the Netgate, we need to know if the T1s are down. In this example, having a single T1 down should not be a deciding factor, because the remaining T1 can handle the traffic, so we choose to monitor the bonded interface at the IP layer.

track 1 interface Multilink ip routing

In the event of an outage, the main router will need to lower its priority, or weight, below the priority of the Netgate, so that the Netgate becomes the new default router for the IP address 192.168.0.1.

vrrp 1 track 1 decrement 100

This event will bring the main router’s priority to 10, well below the minimum for the Netgate.
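
As a sanity check on those numbers, here is a small Python sketch of the arithmetic only (not Cisco’s VRRP implementation), assuming the Netgate advertises a priority of 50 normally and 75 once its VPN tunnel is up, as described above.

MAIN_BASE = 110        # vrrp 1 priority 110
TRACK_DECREMENT = 100  # vrrp 1 track 1 decrement 100

def main_router_priority(multilink_up):
    # Interface tracking subtracts the decrement when the tracked object goes down
    return MAIN_BASE if multilink_up else MAIN_BASE - TRACK_DECREMENT

def netgate_priority(vpn_up):
    return 50 + (25 if vpn_up else 0)   # Netgate default 50, plus 25 with the VPN connected

for multilink_up, vpn_up in [(True, False), (False, True)]:
    main, netgate = main_router_priority(multilink_up), netgate_priority(vpn_up)
    owner = "main router" if main > netgate else "Netgate"
    print(f"Multilink up={multilink_up}: main={main}, Netgate={netgate} -> {owner} owns 192.168.0.1")

With the Multilink up, the main router wins at 110; once tracking decrements it to 10, the Netgate’s 75 takes over, which is exactly the failover described above.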

When the main circuit comes back online we want to switch back to it and bring down the VPN tunnel. We accomplish this using the following command: vrrp 1 preempt

However, when a T1 comes back up it is usually not a clean process, and the telco might also be performing intrusive testing, so it is important to allow some time before we switch traffic back to the main circuit.

vrrp 1 preempt delay minimum 60

This configuration should provide an automatic, redundant backup network link for remote offices at an affordable price.

E-commerce and The End of Search

Most of us consider the Internet a bucket of miscellaneous tidbits, and the modern search engine our personal assistant. But is that analogy correct? You open your browser, bringing up the Google homepage, then enter whatever term you happen to be looking for at the time and bingo. You get a list of results you then have to “search” through to find what you are looking for. So in fact you are searching through the results of what Google searched for.

Google co-founder Larry Page once described the “perfect search engine” as something that “understands exactly what you mean and gives you back exactly what you want”, which is far from what Google is today.

A recent study titled “Google Effects on Memory: Cognitive Consequences of Having Information at Our Fingertips”, by researchers at Columbia, Harvard and Wisconsin-Madison, examined whether the Internet has become our primary transactive memory source, basically an external memory system. These are the conclusions reached by the four controlled experiments in the study:

1) People share information easily because they rapidly think of computers when they find they need knowledge (Expt. 1).

2) The social form of information storage is also reflected in the findings that people forget items they think will be available externally, and remember items they think will not be available (Expts. 2 and 3).

3) Transactive memory is also evident when people seem better able to remember which computer folder an item has been stored in than the identity of the item itself (Expt. 4).

What is relevant here is the effect on whether or not we choose to commit certain information to memory when we know it is readily available on the computer. We store specific things in specific places, like food in the fridge, but who remembers exactly what is in the fridge?

It is completely natural for people to minimize what needs to be encoded into memory by organizing and then encoding the location of the information, rather than the information itself. This is where the traditional search engine falls short of meeting the basic cognitive needs of humans.

The emergence of the mobile device has been remarkable and Apple’s vision in this space has changed the way people access information. There is data to support the notion that people are not mirroring desktop behavior on mobile devices.

People are not searching on smartphones as much as they do on desktops. Steve Jobs attributed this to the availability of mobile apps and the desktop lacking an app store. In reality, the availability of apps, or the lack thereof, is not really the central point. What’s important is that information is being categorized, compartmentalized and organized for consumption, and delivered more efficiently through mobile devices. This is clearly a step in the right direction in delivering more relevant and timely information to the user.

Artificial Intelligence will play a major role in the next wave of innovation, starting with Evolving and Adaptive Fuzzy Systems as classification algorithms, and then matching the wants with the needs of the user. A recent example of this is Alfred, an application that gives personalized restaurant recommendations: it is all recommendations and no direct search.

GiftWoo takes the next step forward in the e-commerce space within a vertical market. Until now, going online to find a gift for your better half has involved a search that returns thousands of choices. Current e-commerce websites are designed to deliver a high number of choices rather than the “right choice” for the consumer. GiftWoo will give the buyer the unique and perfect gift they seek without the searching, by first building a profile for the gift recipient and then using a proprietary algorithm to match the ideal gift to that profile.

Electronic Health Records and the Cloud

Last year I was recruited to find an Electronic Healthcare Records (EHR) system for a doctor who had just gone through a failed implementation. I am always intrigued by being exposed to new sectors of technology and learning systems inside out.

The existing EHR system had a hardware failure, and the vendor was asking for over $10,000 to recover the patient data. This, combined with high maintenance and licensing fees, proved to be too much for the doctor.

A consultant came in and sold the doctor on a hosted EHR system he had developed. Unfortunately, expectations were not set, and the doctor expected his patient data to be available on the new system. Once it became apparent that recovering and importing that data into the new system would cost thousands more, the relationship went south.

This particular project was not only a technical but also a customer service challenge. Right from the start I made sure that the expectations were set and began looking at the possible solutions.

The many options available included traditional vendors, open-source packages and home-grown systems (Tolven Healthcare, PatientOS, OpenEMR, Clearhealth, Abraxas, Medworks & Pulse).

I was looking to implement something that not only met the client’s requirements (demographics, medical history, medications and allergies, immunization status, laboratory test results, radiology images, and billing) but was also scalable as a potential business. I ruled out the traditional EHR systems because of their high capital expenditure, ongoing costs, and approved-VAR requirements. The open-source solutions seemed very attractive, but I was looking for something that did not require an on-site server, so it had to be hosted, and using the cloud made it scalable.

So it came down to hosting an open-source package or using someone who had already done the legwork. Since I didn’t want to support this long term, the search turned up two or three new cloud service providers, of which only one I found mature enough to recommend: Practice Fusion.

Practice Fusion provides a free, web-based Electronic Medical Record (EMR) system to physicians. With charting, scheduling, e-prescribing, billing, lab integrations, referral letters, unlimited support and a Personal Health Record for patients, Practice Fusion’s EMR addresses the complex needs of today’s healthcare providers and disrupts the health IT status quo.

Although this did not turn out to be a passive income generator, which I always have as a goal, it turned out to be very educational and a platform for other ideas and projects.

Cloud Home Security

For a while I have been wanting to do a brain dump of my ideas onto my blog, and I finally have the will to make it happen. Many of the ideas I still think would make great businesses, but for one reason or another I just didn’t execute on them.

So I started playing around with the idea of revolutionizing the home security industry. This is a market that has remained pretty much unchanged for a long time: a monitored burglar alarm service that relies on the police as first responders, a business model which has put the industry at odds with law enforcement due to the high incidence of false positives.

This industry has had monopolistic tendencies for decades, culminating this year in the acquisition of Brink’s Home Security (Broadview Security) by ADT Security Services, bringing together the #1 and #2 companies in the US. Although residential security is just one of the many markets these companies serve, it is definitely the most financially attractive. For years these companies have remained in control by forcing competitors out of business, lowering prices below cost.

The business model, with a change here or there, is basically to move into high-growth areas that provide recurring service revenue.

What caught my attention is that, despite advances in technology, these companies still rely on their old infrastructure. Yes, there are more advanced sensors, including passive infrared, ultrasonic, microwave, photo-electric, smoke and heat, as well as cameras. But for the most part, when the alarm goes off, a call is made to the monitoring service over a land line, reporting the data gathered from these sensors so the call center has something to act on after it calls the home.

This is where I think there is astronomical potential. The data gathered by these sensors would be a gold mine, allowing the monitoring service and basic sensors to be provided for FREE, with a premium charged for more advanced sensors and camera surveillance: a highly sophisticated and integrated system that reduces the number of false positives. The system would of course go beyond security monitoring and merge with home automation and home health monitoring. For the system to scale, the intelligence in the home (the security panel) would need to move to the cloud and communicate with a hub inside the home interfacing with multiple sensors, telephones, the sprinkler system, the entertainment system, the electrical system (smart meter), appliances, air conditioning, the water heater, and the use of home areas by means of “mood” sensors.

Sources of income for the business would be advertising, cross-selling smart devices from manufacturers, upgrades to premium plans, subscriptions to additional services such as health monitoring, and selling the raw data collected, or even selling the data after qualifying it. Imagine being able to provide bulb manufacturers with burn-out rates, advise households on their energy and water usage patterns and how to improve them, target marketing based on social status, which could easily be determined from energy usage patterns, and mine movement patterns within the home.

The Reality Mining Project was a social experiment conducted by MIT in which hundreds of hours of proximity data were collected by tracking mobile phones over a period of 9 months. Researchers created algorithms that could predict a person’s next actions accurately over 85% of the time. The program also determined social status and relationships, and could build a list of a person’s friends and acquaintances and be right 90% of the time.

There is no doubt that this idea would have privacy advocates up in arms, but in a world that is highly connected, where the boundaries between public and private blur, it becomes a feasible business as long as no personally identifiable data is involved.

Attached is a deck on the concept.

How to test development on the iPhone

While working on multiple iPhone application projects, and shortly looking at the iPad for other development opportunities, I found an excellent step-by-step guide on creating a development provisioning profile on http://devclinic.com by Kuix that I thought I should share.

As simple as it may be, I thought I’d contribute and write a tutorial on how to get your development application onto your testing device. Because of the significant speed and memory differences between your development computer and an actual mobile device, it is very important to test your application on a mobile device.

Step 1: Certify
This is the hardest step, so please follow the steps closely.
Open the Keychain Access application, inside your Applications->Utilities folder. Click on Keychain Access->Certificate Assistant->Request a Certificate From a Certificate Authority… Enter your email and your name, and for CA Email I used my email again (the last one doesn’t really matter for these purposes, but it’s required). Choose Saved to disk and click Continue. It will then, by default, save to your desktop.

Open Apple Developer Connection in your browser and log in. Go to the Program Portal section and click on Certificates. Choose Add Certificate. The page will basically tell you to do what I just told you. Scroll all the way to the bottom, where you will upload and submit that new certificate from your desktop. Now your certificate needs to be approved by you or an administrator. Simply click Approve where your certificate is pending. You will then have the option of downloading your approved certificate. Download that, along with the WWDR intermediate certificate linked below your certificate. Double-click both downloaded certificates to install them into your Keychain. Use login, instead of System, for both.

Once that is done, move on.

Step 2: Device
Now click on Devices. Here you will get your mobile device recognized as a development device. You can develop your app on either an iPhone or an iPod Touch. Open Xcode and go to Window->Organizer. Here you will see the device that is currently connected to your computer. Notice the long identifier key next to Identifier:. Now, inside your Program Portal, click on Add Devices. Make up a device name and then copy-paste the identifier value you took from the Xcode Organizer. Submit.

Step 3: App IDs
Next step: click on the App IDs link on the left-hand side, then New App ID. The Description is for your sake, so you know what the ID is for. Choose Generate New, and for Bundle Identifier, it’s like writing a URL backwards:
com.yourCompanyName.AppName. Then submit that.

Step 4: Provisioning Profile
Almost done! Go to the Provisioning page, then New Profile. Choose a memorable profile name. Check the box for your approved certificate. Choose your App ID, created in Step 3. Check the box for your development device. Submit. Now you can download your provisioning profile. Go back to your Xcode Organizer and add your provisioning profile into the Provision section, under Devices.

Step 5: Load it!
In your app, go to Resources->appName-info.plist. Where it says Bundle identifier, change the value to what you entered for your App ID: com.yourCompanyName.AppName.

In the top left-hand corner, in the drop-down menu, choose Device – 3.0 (if you’re running 3.0 firmware). Build and Go.
If you’ve done everything correctly, it will build successfully, and your app will now be on your mobile device!

I know it’s a lot of writing, but every step is pertinent. Good luck, guys=)

~Kuix

GrandCentral to Google Voice

In just under a minute I migrated a couple of GrandCentral accounts to Google Voice, and I am very excited to see a transcript of a voicemail show up in my Inbox.

I will definitely miss the GrandCentral interface, as it is much more intuitive than the new Google Voice GUI.

A limitation currently in place on both platforms is the inability to have two different accounts ring the same number. I would particularly like this so that a personal and a business number could both ring my cell and landlines. The workaround for the moment is leaving one account on GrandCentral and one on Google Voice. Let’s see how long that lasts!

One thing that I have seen more and more recently is GrandCentral dropping calls on me. Maybe it’s Google’s way of getting users to migrate.

Develop an iPhone Application

With the iPhone App Store closing in on the 1 billion download mark, it’s hard to argue that it hasn’t been a huge success, and even with the numerous applications available to do just about anything you can think of, there is still room for innovation as long as you keep an open mind and hold on to your imagination.

Stanford has made available a course on iTunes that will have you creating your very own application in no time.

http://www.stanford.edu/class/cs193p/cgi-bin/index.php

PBX in a Flash with CBeyond

Last week I deployed a PBX in a Flash system using SIPConnect from CBeyond. It was so successful that I will start using PIAF in lieu of Trixbox from now on for all future deployments of this type and will replace my home PBX to take advantage of Skype and Google Voice integration.

In this case I used Aastra 53i (English edition) VoIP phones which, when connected to the network, retrieved an IP address from the DHCP server, contacted the PBX using mDNSResponse, checked for and downloaded the most recent firmware available on the PBX, and downloaded the default configuration, which prompts the user to log in. After login, each phone created a config file on the PBX for future restarts.

These Aastra phones come in two editions: the English/American edition and the European edition. The power supply for the European edition has different connectors, and the display shows symbols instead of words. Apart from that they appeared to be identical, but getting the European edition to automatically connect to the PBX and configure itself was very painful. I had to reset the phone to factory defaults and erase the local configuration multiple times, and finally had to define the TFTP server (PBX) IP address on the phone for it to download the configuration.

Two thumbs up for the PBX in a Flash (PIAF) developers who have done a superb job with this distribution holding up the ideals of the original Asterisk@home open source project.

Their documentation was almost flawless, although it was difficult to find the most recent version of the instructions, as they are laid out in bits and pieces across a blog. In pursuit of a perfect install, I narrowed the process down to running the ISO install, going through the online download and compilation of Asterisk, and running the update/fix scripts. Then, before upgrading or installing any modules or OS updates, I downloaded and installed the files necessary to deploy the Aastra phones (also done by a script), proceeded to install/update the software via the FreePBX Module Admin, and finally applied the OS updates.

Below is the trunk configuration for connecting via SIPConnect to CBeyond from PBX in a Flash:

Outbound caller ID: 5551231234
Never override caller ID: checked
Maximum Channels: 6

Outbound Settings

trunk name=cbeyond

allow=ulaw&alaw&gsm&ilbc&g726&adpcm
context=from-trunk
disallow=all
dtmfmode=auto
fromdomain=sipconnect.dal0.cbeyond.net
host=sipconnect.dal0.cbeyond.net
insecure=very
outboundproxy=sip-proxy.dal0.cbeyond.net
qualify=250
secret=[secret-password]
type=peer
username=5551231234

Registration String: 5551231234:secret-password@cbeyond/5551231234

Note: Notice there are no inbound settings required. The inbound DID configuration will determine where each incoming call from the trunk will ring.
