
Posting XML data to a URL (not via the browser) for Ezpprints in Rails

##Add these requires first
require 'net/https'
require 'rexml/document'
##Posting XML data to a URL in Rails
xml = "<?xml version='1.0' encoding='UTF-8'?>
        <user>
         <firstname>#{@user.first_name}</firstname>
         <address1>#{@user.address}</address1>
         <city>#{@user.city}</city>
         <state>#{@user.state}</state>
         <zip>#{@user.zip}</zip>
         <countrycode>#{@user.country}</countrycode>
         <phone>#{@user.phone}</phone>
        </user>"
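String interpolation works, but it will not escape characters like & in user data. As an alternative sketch, the same payload can be built with REXML (already required above), which escapes text for you; the `User` struct here is just a stand-in for `@user`:

```ruby
require 'rexml/document'

# Hypothetical stand-in for @user, only for this example
User = Struct.new(:first_name, :address, :city, :state, :zip, :country, :phone)
user = User.new("O'Brien & Sons", '1 Main St', 'Austin', 'TX', '73301', 'US', '555-0100')

doc = REXML::Document.new
doc << REXML::XMLDecl.new('1.0', 'UTF-8')
root = doc.add_element('user')

# Map each tag name to its value, then add the elements in order
{ 'firstname'   => user.first_name,
  'address1'    => user.address,
  'city'        => user.city,
  'state'       => user.state,
  'zip'         => user.zip,
  'countrycode' => user.country,
  'phone'       => user.phone }.each do |tag, value|
  root.add_element(tag).text = value
end

xml = ''
doc.write(xml)
puts xml  # the & in the name comes out as &amp;
```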
http://www.test.com is a placeholder endpoint here. It is not a page you can navigate to in a browser; it only accepts the XML POST.
url = URI.parse('http://www.test.com/') # note the trailing slash: url.path must not be empty
request = Net::HTTP::Post.new(url.path)
request.content_type = 'text/xml'
request.body = xml
response = Net::HTTP.start(url.host, url.port) { |http| http.request(request) }
res = response.body
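If the endpoint is HTTPS (as production print APIs usually are), the same request also needs `use_ssl`. A minimal sketch, reusing the placeholder host from above; nothing is actually sent here, the final call is left commented:

```ruby
require 'net/https'
require 'uri'

# Placeholder HTTPS endpoint -- swap in the real one
url = URI.parse('https://www.test.com/orders')

request = Net::HTTP::Post.new(url.path)
request.content_type = 'text/xml'
request.body = "<?xml version='1.0' encoding='UTF-8'?><user/>"

http = Net::HTTP.new(url.host, url.port)
http.use_ssl     = (url.scheme == 'https')
http.verify_mode = OpenSSL::SSL::VERIFY_PEER

# The request object carries everything before it is sent:
puts request.content_type  # "text/xml"
puts http.use_ssl?         # true
# response = http.request(request)  # uncomment to actually send
```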
The response body looks like this:
"<?xml version="1.0"?>
<shippingOptions>
  <order orderid="1234">
     <option type="FC" price="9.95" shippingMethod="USFC" description="Economy to United States"/>
     <option type="PM" price="9.95" shippingMethod="USPM" description="Express to United States"/>
     <option type="SD" price="15.95" shippingMethod="USSD"
        description="Second Business Day to United States"/>
     <option type="ON" price="23.95" shippingMethod="OVNT"
        description="Next Business Day to United States"/>
  </order>
</shippingOptions>"
###Extracting values from the XML response
# Parse the response body, then collect the attributes of every <option> node
doc = REXML::Document.new(res)
shipping_options = []
REXML::XPath.each(doc, '//option') do |element|
  shipping_options << { :id      => element.attributes['type'],
                        :service => element.attributes['shippingMethod'],
                        :price   => element.attributes['price'] }
end
p shipping_options
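Putting the pieces together: a runnable sketch that parses the sample response above and picks the cheapest shipping option; the response body is inlined here in place of the live res:

```ruby
require 'rexml/document'

# Inlined copy of the sample response (stands in for res = response.body)
response_body = <<~XML
  <?xml version="1.0"?>
  <shippingOptions>
    <order orderid="1234">
      <option type="FC" price="9.95" shippingMethod="USFC" description="Economy to United States"/>
      <option type="PM" price="9.95" shippingMethod="USPM" description="Express to United States"/>
      <option type="SD" price="15.95" shippingMethod="USSD" description="Second Business Day to United States"/>
      <option type="ON" price="23.95" shippingMethod="OVNT" description="Next Business Day to United States"/>
    </order>
  </shippingOptions>
XML

doc = REXML::Document.new(response_body)

options = []
REXML::XPath.each(doc, '//option') do |element|
  options << { :id      => element.attributes['type'],
               :service => element.attributes['shippingMethod'],
               :price   => element.attributes['price'] }
end

# Attributes come back as strings, so convert the price before comparing
cheapest = options.min_by { |o| o[:price].to_f }
puts options.length      # 4 options parsed
puts cheapest[:service]  # USFC -- the first of the two 9.95 options
```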
