Elasticsearch and Logstash not able to index the syslogs

Description

I am trying out logstash + elasticsearch + kibana for the first time, to index data stored in syslog format.
In Kibana, I am able to see the cluster and node, but no data is being indexed.

I see the following messages when I run logstash with the following conf:

input {
  file {
    path => "/var/tmp/syslog"
    type => "syslog"
  }
}

filter {

}

output {
  stdout { }
  elasticsearch {
    cluster => "VERSA-DEMO"
    embedded => false
    node_name => "VERSA-MASTER"
  }
}

sudo java -jar logstash-1.3.2-flatjar.jar agent -f logstash.conf --debug

_discover_file_glob: /var/tmp/syslog: glob is: ["/var/tmp/syslog"] {:level=>:debug, :file=>"/opt/logstash/logstash-1.3.2-flatjar.jar!/filewatch/watch.rb", :line=>"117"}
_discover_file_glob: /var/tmp/syslog: glob is: ["/var/tmp/syslog"] {:level=>:debug, :file=>"/opt/logstash/logstash-1.3.2-flatjar.jar!/filewatch/watch.rb", :line=>"117"}
_discover_file_glob: /var/tmp/syslog: glob is: ["/var/tmp/syslog"] {:level=>:debug, :file=>"/opt/logstash/logstash-1.3.2-flatjar.jar!/filewatch/watch.rb", :line=>"117"}
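
(A default worth knowing here: the file input tails from the end of a file, so a file that never receives new lines produces no events even though it is being watched, as the _discover_file_glob lines above show. A sketch of the same input reading from the top instead, assuming the file input's start_position option and that any old sincedb entry for this path is cleared first:)

```
input {
  file {
    path => "/var/tmp/syslog"
    type => "syslog"
    # read from the top instead of tailing the end; only applies to
    # files not yet recorded in the sincedb
    start_position => "beginning"
  }
}
```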

In Elasticsearch I see the following logs:
client=true, data=false}])
[2014-01-15 14:17:41,416][INFO ][cluster.service ] [VERSA-MASTER] removed {[VERSA-MASTER][sRcndJVeRDO8nypG-O-fZQ][inet/10.48.1.202:9301]{client=true, data=false},}, reason: zen-disco-node_failed([VERSA-MASTER][sRcndJVeRDO8nypG-O-fZQ][inet/10.48.1.202:9301]{client=true, data=false}), reason transport disconnected (with verified connect)
[2014-01-15 14:23:46,398][INFO ][cluster.service ] [VERSA-MASTER] added {[VERSA-MASTER][PhXpdEnnTGCrhh0wSNJF9Q][inet/10.48.1.202:9301]{client=true, data=false},}, reason: zen-disco-receive(join from node[[VERSA-MASTER][PhXpdEnnTGCrhh0wSNJF9Q][inet/10.48.1.202:9301]{client=true, data=false

What should I look for?

thanks
Roopa

Activity

Roopa Bayar
January 20, 2014, 8:18 PM

Sorry, Ananda, I did not get back to you earlier. The problem is still there
even after changing the node name; it does not work all the time.

The sequence of steps I follow is:

1) I have a /var/log/syslog.txt file. This file has many log entries. It
does not get updated.
2) I start Elasticsearch and Logstash.
3) I use Kibana to see the logs.

I think logstash is not able to talk to elasticsearch.
Is there any debug command or log file I can check to see why it is not connecting?

To start re-indexing in Elasticsearch, do I need to remove the data
directory under Elasticsearch?

thanks
Roopa


anand verma
January 20, 2014, 8:39 PM

Yes, there can be many reasons why indexing is not happening. Try the following and see if it helps resolve your issue:

1. Try the stdout{} output plugin and see if you are getting events on standard output.

If not, try
2. deleting the sincedb file:
$> rm ~/.sincedb_*
and try again.

If still not resolved, try
3. curl -XGET 'http://localhost:9200/_cluster/health?pretty'

and see how many nodes there are in your cluster. Do curl -XGET 'http://localhost:9200/_nodes?pretty' to check which nodes are connected. Also do curl -XGET 'http://localhost:9200/_stats?pretty' and see whether indices are being created or not.
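
(If you end up running these checks repeatedly, the interesting fields can be pulled out of the health response in one step. A sketch against a saved copy of the response shape seen on 0.90.x; health.json is a hypothetical filename, and the live call would be curl -s 'http://localhost:9200/_cluster/health?pretty' > health.json:)

```shell
# Save a copy of the cluster-health response (shape as in Elasticsearch 0.90.x),
# then keep only the fields that matter for a "nothing is indexed" diagnosis.
cat > health.json <<'EOF'
{
  "cluster_name" : "VERSA-DEMO",
  "status" : "green",
  "number_of_nodes" : 2,
  "number_of_data_nodes" : 1
}
EOF
# Prints the status and the two node-count lines.
grep -E '"(status|number_of_nodes|number_of_data_nodes)"' health.json
```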

Roopa Bayar
January 20, 2014, 9:22 PM

1. Did not work.

2. Tried deleting the sincedb file. Did not work.

3. curl -XGET 'http://localhost:9200/_cluster/health?pretty'

root@versa-test32:/opt/logstash# curl -XGET 'http://localhost:9200/_cluster/health?pretty'
{
  "cluster_name" : "VERSA-DEMO",
  "status" : "green",
  "timed_out" : false,
  "number_of_nodes" : 2,
  "number_of_data_nodes" : 1,
  "active_primary_shards" : 0,
  "active_shards" : 0,
  "relocating_shards" : 0,
  "initializing_shards" : 0,
  "unassigned_shards" : 0
}

2 nodes as expected.

4. curl -XGET 'http://localhost:9200/_nodes?pretty'
root@versa-test32:/opt/logstash# curl -XGET 'http://localhost:9200/_nodes?pretty'
{
  "ok" : true,
  "cluster_name" : "VERSA-DEMO",
  "nodes" : {
    "-0TMuDnhTSep94s692cRew" : {
      "name" : "Nobilus",
      "transport_address" : "inet/10.48.1.202:9300",
      "hostname" : "versa-test32",
      "version" : "0.90.10",
      "http_address" : "inet/10.48.1.202:9200"
    },
    "OeQ-C2CcQ0O9pm2QKaw-lQ" : {
      "name" : "Whiteout",
      "transport_address" : "inet/10.48.1.202:9301",
      "hostname" : "versa-test32",
      "version" : "0.90.9",  ---> I don't know why the version is different from Nobilus
      "attributes" : {
        "client" : "true",
        "data" : "false"
      }
    }
  }
}

5. curl -XGET 'http://localhost:9200/_stats?pretty'
root@versa-test32:/opt/logstash# curl -XGET 'http://localhost:9200/_stats?pretty'
{
  "ok" : true,
  "_shards" : {
    "total" : 0,
    "successful" : 0,
    "failed" : 0
  },
  "_all" : {
    "primaries" : { },
    "total" : { }
  },
  "indices" : { }
}

No indices

thanks

Roopa


anand verma
January 20, 2014, 10:13 PM

If stdout{} is not giving any output, then there is no chance that it is parsing anything. Make sure you get stdout{} working first.
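
(A minimal way to do that is to take both the file input and Elasticsearch out of the picture entirely; a sketch, with minimal.conf as a hypothetical filename:)

```
input  { stdin  { } }
output { stdout { } }
```

Running echo hello | sudo java -jar logstash-1.3.2-flatjar.jar agent -f minimal.conf should print one event for "hello" if the pipeline itself is healthy; if that works, the problem is in the file input or the elasticsearch output, not in logstash itself.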

Roopa Bayar
January 20, 2014, 10:30 PM

For stdout, is the below config correct?

input {
  file {
    path => "/var/log/messages"
    type => "syslog"
  }
}

output {
  stdout { }
  elasticsearch {
    cluster => "VERSA-DEMO"
    embedded => false
  }
}

Attached is the output of running logstash with this config.

Sample lines from /var/log/messages. Is it possible logstash is not able to
interpret the below data as syslog data?

<166>1 2014-01-15T19:45:41.GMT ABC ABCLog XYZId=1 inputInterface=2 outputInterface=2 sourceAddress=192.168.1.202 destinationAddress=192.168.1.255
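
(As a side note, that line is RFC 5424-style syslog: the leading <166> is the PRI value, facility * 8 + severity. A plain-shell sketch to decode it, using a truncated copy of the sample line above:)

```shell
# Decode the RFC 5424 PRI value: PRI = facility * 8 + severity.
line='<166>1 2014-01-15T19:45:41.GMT ABC ABCLog XYZId=1 inputInterface=2'
pri=$(printf '%s\n' "$line" | sed -n 's/^<\([0-9][0-9]*\)>.*/\1/p')
echo "pri=$pri facility=$((pri / 8)) severity=$((pri % 8))"
# -> pri=166 facility=20 severity=6 (local4.info)
```

Either way, the file input reads raw lines without parsing them, and with an empty filter {} block even unparsed lines should still reach stdout{}; the nonstandard timestamp here (".GMT" instead of a numeric offset) would only matter once a syslog or date filter is added.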

I am attending elasticsearch training this week; I will also check with
people there.

thanks
Roopa


Assignee

anand verma

Reporter

Roopa Bayar

Labels

None
