How to Read Hive Data with Spark SQL and Run It Locally in IDEA
Environment setup:
Hadoop version: 2.6.5
Spark version: 2.3.0
Hive version: 1.2.2
master host: 192.168.100.201
slave1 host: 192.168.100.201
The pom.xml dependencies are as follows:
<?xml version="1.0" encoding="UTF-8"?>
<project xmlns="http://maven.apache.org/POM/4.0.0"
         xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
         xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 http://maven.apache.org/xsd/maven-4.0.0.xsd">
    <modelVersion>4.0.0</modelVersion>

    <groupId>com.spark</groupId>
    <artifactId>spark_practice</artifactId>
    <version>1.0-SNAPSHOT</version>

    <properties>
        <project.build.sourceEncoding>UTF-8</project.build.sourceEncoding>
        <maven.compiler.source>1.8</maven.compiler.source>
        <maven.compiler.target>1.8</maven.compiler.target>
        <spark.core.version>2.3.0</spark.core.version>
    </properties>

    <dependencies>
        <dependency>
            <groupId>junit</groupId>
            <artifactId>junit</artifactId>
            <version>4.11</version>
            <scope>test</scope>
        </dependency>
        <dependency>
            <groupId>org.apache.spark</groupId>
            <artifactId>spark-core_2.11</artifactId>
            <version>${spark.core.version}</version>
        </dependency>
        <dependency>
            <groupId>org.apache.spark</groupId>
            <artifactId>spark-sql_2.11</artifactId>
            <version>${spark.core.version}</version>
        </dependency>
        <dependency>
            <groupId>mysql</groupId>
            <artifactId>mysql-connector-java</artifactId>
            <version>5.1.38</version>
        </dependency>
        <dependency>
            <groupId>org.apache.spark</groupId>
            <artifactId>spark-hive_2.11</artifactId>
            <version>2.3.0</version>
        </dependency>
    </dependencies>
</project>
Note: the hive-site.xml configuration file must be placed in the project's resources directory.
The hive-site.xml configuration is as follows:
<?xml version="1.0" encoding="UTF-8"?>
<configuration>
    <property>
        <name>hive.metastore.uris</name>
        <value>thrift://192.168.100.201:9083</value>
    </property>
    <property>
        <name>hive.server2.thrift.port</name>
        <value>10000</value>
    </property>
    <property>
        <name>javax.jdo.option.ConnectionURL</name>
        <value>jdbc:mysql://node01:3306/hive?createDatabaseIfNotExist=true</value>
    </property>
    <property>
        <name>javax.jdo.option.ConnectionDriverName</name>
        <value>com.mysql.jdbc.Driver</value>
    </property>
    <property>
        <name>javax.jdo.option.ConnectionUserName</name>
        <value>root</value>
    </property>
    <property>
        <name>javax.jdo.option.ConnectionPassword</name>
        <value>123456</value>
    </property>
    <property>
        <name>hive.zookeeper.quorum</name>
        <value>node01,node02,node03</value>
    </property>
    <property>
        <name>hbase.zookeeper.quorum</name>
        <value>node01,node02,node03</value>
    </property>
    <property>
        <name>hive.metastore.warehouse.dir</name>
        <value>/user/hive/warehouse</value>
    </property>
    <property>
        <name>fs.defaultFS</name>
        <value>hdfs://192.168.100.201:9000</value>
    </property>
    <property>
        <name>hive.metastore.schema.verification</name>
        <value>false</value>
    </property>
    <property>
        <name>datanucleus.autoCreateSchema</name>
        <value>true</value>
    </property>
    <property>
        <name>datanucleus.autoStartMechanism</name>
        <value>checked</value>
    </property>
</configuration>
Main class code:
import org.apache.spark.sql.SparkSession

object SparksqlTest2 {

  def main(args: Array[String]): Unit = {
    // Build a local SparkSession with Hive support so that
    // hive-site.xml on the classpath is picked up automatically.
    val spark: SparkSession = SparkSession
      .builder
      .master("local[*]")
      .appName("JavaSparkHiveExample")
      .enableHiveSupport()
      .getOrCreate()

    spark.sql("show databases").show()
    spark.sql("show tables").show()
    spark.sql("select * from person").show()
    spark.stop()
  }
}
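For comparison, the same person table can also be read through the DataFrame API instead of raw SQL. This is only a minimal sketch, assuming it runs inside the same main method and reuses the spark session built above:

    // Minimal sketch: DataFrame-API equivalent of "select * from person".
    // Assumes the `spark` SparkSession created above, with Hive support enabled.
    val personDf = spark.table("person")
    personDf.printSchema() // column names and types come from the Hive metastore
    personDf.show()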
Prerequisite: the queries run against the default database, and the person table contains three rows.
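The article does not show how the person table was created. Purely as an illustration, the following hypothetical setup job assumes an (id, name, age) schema and writes three sample rows; adjust it to your actual table definition:

    import org.apache.spark.sql.SparkSession

    // Hypothetical one-off job that creates and populates default.person.
    // The (id, name, age) schema and the sample values are assumptions,
    // not taken from the original article.
    object CreatePersonTable {
      def main(args: Array[String]): Unit = {
        val spark = SparkSession
          .builder
          .master("local[*]")
          .appName("CreatePersonTable")
          .enableHiveSupport()
          .getOrCreate()

        import spark.implicits._

        // Three sample rows, matching the "three rows" premise above.
        Seq((1, "zhangsan", 20), (2, "lisi", 25), (3, "wangwu", 30))
          .toDF("id", "name", "age")
          .write
          .mode("overwrite")
          .saveAsTable("default.person")

        spark.stop()
      }
    }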
Before testing, make sure the Hadoop cluster is started normally, then start Hive's metastore service:
./bin/hive --service metastore
Run the program and the three queries print their results.
If you hit the following error:
Exception in thread "main" org.apache.spark.sql.AnalysisException: java.lang.RuntimeException: java.io.IOException: (null) entry in command string: null chmod 0700 C:\Users\dell\AppData\Local\Temp\c530fb25-b267-4dd2-b24d-741727a6fbf3_resources;
    at org.apache.spark.sql.hive.HiveExternalCatalog.withClient(HiveExternalCatalog.scala:106)
    at org.apache.spark.sql.hive.HiveExternalCatalog.databaseExists(HiveExternalCatalog.scala:194)
    at org.apache.spark.sql.internal.SharedState.externalCatalog$lzycompute(SharedState.scala:114)
    at org.apache.spark.sql.internal.SharedState.externalCatalog(SharedState.scala:102)
    at org.apache.spark.sql.hive.HiveSessionStateBuilder.externalCatalog(HiveSessionStateBuilder.scala:39)
    at org.apache.spark.sql.hive.HiveSessionStateBuilder.catalog$lzycompute(HiveSessionStateBuilder.scala:54)
    at org.apache.spark.sql.hive.HiveSessionStateBuilder.catalog(HiveSessionStateBuilder.scala:52)
    at org.apache.spark.sql.hive.HiveSessionStateBuilder$$anon$1.<init>(HiveSessionStateBuilder.scala:69)
    at org.apache.spark.sql.hive.HiveSessionStateBuilder.analyzer(HiveSessionStateBuilder.scala:69)
    at org.apache.spark.sql.internal.BaseSessionStateBuilder$$anonfun$build$2.apply(BaseSessionStateBuilder.scala:293)
    at org.apache.spark.sql.internal.BaseSessionStateBuilder$$anonfun$build$2.apply(BaseSessionStateBuilder.scala:293)
    at org.apache.spark.sql.internal.SessionState.analyzer$lzycompute(SessionState.scala:79)
    at org.apache.spark.sql.internal.SessionState.analyzer(SessionState.scala:79)
    at org.apache.spark.sql.execution.QueryExecution.analyzed$lzycompute(QueryExecution.scala:57)
    at org.apache.spark.sql.execution.QueryExecution.analyzed(QueryExecution.scala:55)
    at org.apache.spark.sql.execution.QueryExecution.assertAnalyzed(QueryExecution.scala:47)
    at org.apache.spark.sql.Dataset$.ofRows(Dataset.scala:74)
    at org.apache.spark.sql.SparkSession.sql(SparkSession.scala:638)
    at com.tongfang.learn.spark.hive.HiveTest.main(HiveTest.java:15)
Solution:
1. Download the Hadoop Windows binary package (winutils) from: https://github.com/steveloughran/winutils
2. In the run configuration of the launcher class, set the environment variable HADOOP_HOME=D:\winutils\hadoop-2.6.4, where the value is the directory of the Hadoop Windows binaries; a code-level alternative is sketched below.
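If you would rather not touch the IDEA run configuration, a common alternative is to set the hadoop.home.dir system property in code before the SparkSession is created. This is only a sketch reusing the example path above; point it at your own winutils directory:

    // Code-level alternative to the HADOOP_HOME environment variable: set
    // hadoop.home.dir before any Hadoop/Spark classes are initialized,
    // i.e. as the first line of main.
    // D:\winutils\hadoop-2.6.4 is just the example path from this article.
    System.setProperty("hadoop.home.dir", "D:\\winutils\\hadoop-2.6.4")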
This concludes the walkthrough of reading Hive data with Spark SQL from a local IDEA run.