

The relationship between hostname and /etc/hosts
When asked to change a hostname, many people immediately think of editing /etc/hosts, assuming that /etc/hosts is the hostname's configuration file. It is not.
The hosts file plays a role similar to DNS: it maps IP addresses to hostnames. In the early days of the Internet there were few computers, and a single hosts file could list every connected machine; as the Internet grew this no longer scaled, so the distributed DNS system emerged, with DNS servers providing the same kind of IP-to-name mapping. See man hosts for details.
Before sending a name-resolution request to a DNS server, Linux first consults /etc/hosts; if a matching record exists there, that record is used. /etc/hosts typically contains at least the loopback record:
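127.0.0.1   localhost localhost.localdomain localhost4 localhost4.localdomain4
::1         localhost localhost.localdomain localhost6 localhost6.localdomain6

(The exact aliases vary by distribution; the above is the stock CentOS 7 form.)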

Source: https://my.oschina.net/xhhuang/blog/807914

I. Hardware Environment

My hardware is a 云創(chuàng) minicloud appliance consisting of three nodes (each with 8 GB RAM, one 128 GB SSD, and three 3 TB SATA drives) and one gigabit switch.

II. Pre-installation Preparation

1. Create a hadoop user on CentOS 7. The official recommendation is to install HDFS, MapReduce, and YARN under separate users, but to keep things simple I installed everything under the single hadoop user.

2. Download the installation packages:

1) JDK: jdk-8u112-linux-x64.rpm

Download: http://www.oracle.com/technetwork/java/javase/downloads/jdk8-downloads-2133151.html

2) Hadoop 2.7.3: hadoop-2.7.3.tar.gz

Download: http://archive.apache.org/dist/hadoop/common/stable2/
3. Remove the OpenJDK packages that ship with CentOS 7 (as root)

1) First, list the OpenJDK packages already on the system:

rpm -qa|grep jdk

You should see output like:

[hadoop@localhost Desktop]$ rpm -qa|grep jdk
java-1.7.0-openjdk-1.7.0.111-2.6.7.2.el7_2.x86_64
java-1.8.0-openjdk-headless-1.8.0.101-3.b13.el7_2.x86_64
java-1.8.0-openjdk-1.8.0.101-3.b13.el7_2.x86_64
java-1.7.0-openjdk-headless-1.7.0.111-2.6.7.2.el7_2.x86_64

2) Remove every OpenJDK package found above:

yum -y remove java-1.7.0-openjdk-1.7.0.111-2.6.7.2.el7_2.x86_64
yum -y remove java-1.8.0-openjdk-headless-1.8.0.101-3.b13.el7_2.x86_64
yum -y remove java-1.8.0-openjdk-1.8.0.101-3.b13.el7_2.x86_64
yum -y remove java-1.7.0-openjdk-headless-1.7.0.111-2.6.7.2.el7_2.x86_64
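Equivalently, the four removals can be collapsed into one pipeline (a shortcut, assuming the grep pattern matches only the OpenJDK packages you intend to remove):

rpm -qa | grep openjdk | xargs yum -y remove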

4. Install the Oracle JDK (as root)

rpm -ivh jdk-8u112-linux-x64.rpm

After installation, the JDK path is /usr/java/jdk1.8.0_112.

Next, add the JDK path to the system environment variables:

vi /etc/profile

Append the following at the end of the file:

export JAVA_HOME=/usr/java/jdk1.8.0_112
export JRE_HOME=/usr/java/jdk1.8.0_112/jre
export PATH=$PATH:$JAVA_HOME/bin:$JRE_HOME/bin
export CLASSPATH=.:$JAVA_HOME/lib/dt.jar:$JAVA_HOME/lib/tools.jar

Save and close profile, then run the following command for the configuration to take effect:

source /etc/profile

Now java -version verifies that the JDK path is configured correctly:

[root@localhost jdk1.8.0_112]# java -version
java version "1.8.0_112"
Java(TM) SE Runtime Environment (build 1.8.0_112-b15)
Java HotSpot(TM) 64-Bit Server VM (build 25.112-b15, mixed mode)
[root@localhost jdk1.8.0_112]#
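As an additional sanity check (optional), confirm that the shell picked up the new variables:

echo $JAVA_HOME
which java

The first should print /usr/java/jdk1.8.0_112 and the second a java binary that resolves to the new JDK.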

5. Disable the firewall (as root)

Run the following commands to stop and disable the firewall:

systemctl stop firewalld.service  
systemctl disable firewalld.service

The terminal session looks like this:

[root@localhost Desktop]# systemctl stop firewalld.service 
[root@localhost Desktop]# systemctl disable firewalld.service
Removed symlink /etc/systemd/system/dbus-org.fedoraproject.FirewallD1.service.
Removed symlink /etc/systemd/system/basic.target.wants/firewalld.service.
[root@localhost Desktop]#
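If you want to double-check that the firewall is really off, query its state:

systemctl status firewalld.service
firewall-cmd --state

The latter should report "not running".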

6. Change the hostnames and configure the network (as root)

1) Change the hostnames

On the master host:

hostnamectl set-hostname master

On the slave1 host:

hostnamectl set-hostname slave1

On the slave2 host:

hostnamectl set-hostname slave2
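Each change can be verified on the spot:

hostnamectl status

The "Static hostname" field should show the new name; re-open the terminal if the shell prompt still shows the old one.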

2) Configure the network

Using the master host as an example, here is how to configure a static address and the hosts file.

Each of my nodes has two NICs; I configure one of them with a static IP for internal cluster communication.

vi /etc/sysconfig/network-scripts/ifcfg-enp7s0

(Note: on my master machine the NIC to configure is enp7s0, hence the file name ifcfg-enp7s0.)

The original contents of ifcfg-enp7s0:

TYPE=Ethernet
BOOTPROTO=dhcp
DEFROUTE=yes
PEERDNS=yes
PEERROUTES=yes
IPV4_FAILURE_FATAL=no
IPV6INIT=yes
IPV6_AUTOCONF=yes
IPV6_DEFROUTE=yes
IPV6_PEERDNS=yes
IPV6_PEERROUTES=yes
IPV6_FAILURE_FATAL=no
NAME=enp7s0
UUID=914595f1-e6f9-4c9b-856a-c4bd79ffe987
DEVICE=enp7s0
ONBOOT=no

Change it to:

TYPE=Ethernet
ONBOOT=yes
DEVICE=enp7s0
UUID=914595f1-e6f9-4c9b-856a-c4bd79ffe987
BOOTPROTO=static
IPADDR=59.71.229.189
GATEWAY=59.71.229.254
DEFROUTE=yes
IPV6INIT=no
IPV4_FAILURE_FATAL=yes
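For the new address to take effect, restart networking and inspect the interface (I use the classic CentOS 7 network service here; a setup managed by NetworkManager may differ):

systemctl restart network
ip addr show enp7s0

The interface should now report the static address 59.71.229.189.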

3) Edit /etc/hosts

vi /etc/hosts

Add the following entries:

59.71.229.189 master
59.71.229.190 slave1
59.71.229.191 slave2

Apply the network configuration and hosts-file changes above on every node in the cluster.
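A quick reachability test from any node confirms that the static addresses and name resolution are in place:

ping -c 1 slave1
ping -c 1 slave2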

7. Configure passwordless SSH between cluster nodes (as the hadoop user)

For convenience, I set things up so that any node in the cluster can SSH into any other node without a password. The steps:

1) On every machine, run the following as the hadoop user:

ssh-keygen -t rsa -P ''

Press Enter through every prompt.

2) On every machine, first append its own public key to authorized_keys so that ssh localhost works without a password (run from inside ~/.ssh):

cd ~/.ssh
cat id_rsa.pub >> authorized_keys

3) Then distribute each machine's public key to every other machine with scp; you will be prompted for the other machines' passwords during this step:

master:

scp /home/hadoop/.ssh/id_rsa.pub hadoop@slave1:/home/hadoop/.ssh/id_rsa_master.pub
scp /home/hadoop/.ssh/id_rsa.pub hadoop@slave2:/home/hadoop/.ssh/id_rsa_master.pub

slave1:

scp /home/hadoop/.ssh/id_rsa.pub hadoop@master:/home/hadoop/.ssh/id_rsa_slave1.pub
scp /home/hadoop/.ssh/id_rsa.pub hadoop@slave2:/home/hadoop/.ssh/id_rsa_slave1.pub

slave2:

scp /home/hadoop/.ssh/id_rsa.pub hadoop@master:/home/hadoop/.ssh/id_rsa_slave2.pub
scp /home/hadoop/.ssh/id_rsa.pub hadoop@slave1:/home/hadoop/.ssh/id_rsa_slave2.pub

4) On each host, go into /home/hadoop/.ssh/ and use cat to append every public key received from the other machines (that is, everything except the host's own id_rsa.pub) to authorized_keys. Then restrict the file's permissions with chmod and delete the copied key files with rm:

master:

cat id_rsa_slave1.pub >> authorized_keys
cat id_rsa_slave2.pub >> authorized_keys
chmod 600 authorized_keys
rm id_rsa*.pub

slave1:

cat id_rsa_master.pub >> authorized_keys
cat id_rsa_slave2.pub >> authorized_keys
chmod 600 authorized_keys
rm id_rsa*.pub

slave2:

cat id_rsa_master.pub >> authorized_keys
cat id_rsa_slave1.pub >> authorized_keys
chmod 600 authorized_keys
rm id_rsa*.pub

With these steps done, any machine in the cluster can log into any other machine via ssh without a password.
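A round-trip test confirms it; none of the following should prompt for a password:

ssh slave1 hostname
ssh slave2 hostname

If a password prompt still appears, the usual culprits are permissions: ~/.ssh must be 700 and authorized_keys 600.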

III. Installing and Configuring Hadoop (all steps below run as the hadoop user)

1. Extract hadoop-2.7.3.tar.gz into /home/hadoop/. (In this walkthrough the archive is on the hadoop user's Desktop.) First extract it where it sits:

tar -zxvf hadoop-2.7.3.tar.gz

Then copy the extracted hadoop-2.7.3 directory into /home/hadoop/ and remove the copy left at the original location:

cp -r /home/hadoop/Desktop/hadoop-2.7.3 /home/hadoop/
rm -rf /home/hadoop/Desktop/hadoop-2.7.3

2. Configuration details:

1) On master, first create the following directories under /home/hadoop/:

mkdir -p /home/hadoop/hadoopdir/name
mkdir -p /home/hadoop/hadoopdir/data
mkdir -p /home/hadoop/hadoopdir/temp
mkdir -p /home/hadoop/hadoopdir/logs
mkdir -p /home/hadoop/hadoopdir/pids

2) Then copy the hadoopdir tree to the other nodes with scp:

scp -r /home/hadoop/hadoopdir hadoop@slave1:/home/hadoop/
scp -r /home/hadoop/hadoopdir hadoop@slave2:/home/hadoop/

3) Enter /home/hadoop/hadoop-2.7.3/etc/hadoop and edit the following files:

hadoop-env.sh:

export JAVA_HOME=/usr/java/jdk1.8.0_112
export HADOOP_LOG_DIR=/home/hadoop/hadoopdir/logs
export HADOOP_PID_DIR=/home/hadoop/hadoopdir/pids

mapred-env.sh:

export JAVA_HOME=/usr/java/jdk1.8.0_112
export HADOOP_MAPRED_LOG_DIR=/home/hadoop/hadoopdir/logs
export HADOOP_MAPRED_PID_DIR=/home/hadoop/hadoopdir/pids

yarn-env.sh:

export JAVA_HOME=/usr/java/jdk1.8.0_112
YARN_LOG_DIR=/home/hadoop/hadoopdir/logs

The slaves file:

#localhost
slave1
slave2

(Note: if localhost is not commented out in slaves, the local machine will also serve as a DataNode.)

core-site.xml:

<configuration>
    <property>
        <name>fs.defaultFS</name>
        <value>hdfs://master:9000</value>
    </property>
    <property>
        <name>io.file.buffer.size</name>
        <value>131072</value>
    </property>
    <property>
        <name>hadoop.tmp.dir</name>
        <value>file:///home/hadoop/hadoopdir/temp</value>
    </property>
</configuration>

hdfs-site.xml:

<configuration>
    <property>
        <name>dfs.namenode.name.dir</name>
        <value>file:///home/hadoop/hadoopdir/name</value>
    </property>
    <property>
        <name>dfs.datanode.data.dir</name>
        <value>file:///home/hadoop/hadoopdir/data</value>
    </property>
    <property>
        <name>dfs.replication</name>
        <value>2</value>
    </property>
    <property>
        <name>dfs.blocksize</name>
        <value>64m</value>
    </property>
    <property>
        <name>dfs.namenode.secondary.http-address</name>
        <value>master:9001</value>
    </property>
    <property>
        <name>dfs.webhdfs.enabled</name>
        <value>true</value>
    </property>
</configuration>

mapred-site.xml:

cp mapred-site.xml.template mapred-site.xml
vi mapred-site.xml

<configuration>
    <property>
        <name>mapreduce.framework.name</name>
        <value>yarn</value>
        <final>true</final>
    </property>
    <property>
        <name>mapreduce.jobhistory.address</name>
        <value>master:10020</value>
    </property>
    <property>
        <name>mapreduce.jobtracker.http.address</name>
        <value>master:50030</value>
    </property>
    <property>
        <name>mapred.job.tracker</name>
        <value>http://master:9001</value>
    </property>
    <property>
        <name>mapreduce.jobhistory.webapp.address</name>
        <value>master:19888</value>
    </property>
</configuration>

yarn-site.xml:

<configuration>
    <property>
        <name>yarn.nodemanager.aux-services</name>
        <value>mapreduce_shuffle</value>
    </property>
    <property>
        <name>yarn.nodemanager.aux-services.mapreduce_shuffle.class</name>
        <value>org.apache.hadoop.mapred.ShuffleHandler</value>
    </property>
    <property>
        <name>yarn.resourcemanager.hostname</name>
        <value>master</value>
    </property>
    <property>
        <name>yarn.resourcemanager.scheduler.address</name>
        <value>master:8030</value>
    </property>
    <property>
        <name>yarn.resourcemanager.resource-tracker.address</name>
        <value>master:8031</value>
    </property>
    <property>
        <name>yarn.resourcemanager.address</name>
        <value>master:8032</value>
    </property>
    <property>
        <name>yarn.resourcemanager.admin.address</name>
        <value>master:8033</value>
    </property>
    <property>
        <name>yarn.resourcemanager.webapp.address</name>
        <value>master:8088</value>
    </property>
</configuration>

4) From master, copy the entire /home/hadoop/hadoop-2.7.3 directory to the other nodes:

scp -r /home/hadoop/hadoop-2.7.3 hadoop@slave1:/home/hadoop/
scp -r /home/hadoop/hadoop-2.7.3 hadoop@slave2:/home/hadoop/

5) Enter /home/hadoop/hadoop-2.7.3/bin and format the filesystem:

./hdfs namenode -format

Formatting prints a long stream of output; "Exiting with status 0" in the last few lines means the format succeeded. If it fails, read the log carefully to find the cause. Below is an error I ran into myself:

17/10/26 19:44:34 INFO ipc.Client: Retrying connect to server: slave2/192.168.84.202:8485. Already tried 7 time(s); retry policy is RetryUpToMaximumCountWithFixedSleep(maxRetries=10, sleepTime=1000 MILLISECONDS)
17/10/26 19:44:34 INFO ipc.Client: Retrying connect to server: slave3/192.168.84.203:8485. Already tried 7 time(s); retry policy is RetryUpToMaximumCountWithFixedSleep(maxRetries=10, sleepTime=1000 MILLISECONDS)
17/10/26 19:44:34 INFO ipc.Client: Retrying connect to server: slave1/192.168.84.201:8485. Already tried 7 time(s); retry policy is RetryUpToMaximumCountWithFixedSleep(maxRetries=10, sleepTime=1000 MILLISECONDS)
17/10/26 19:44:35 INFO ipc.Client: Retrying connect to server: slave2/192.168.84.202:8485. Already tried 8 time(s); retry policy is RetryUpToMaximumCountWithFixedSleep(maxRetries=10, sleepTime=1000 MILLISECONDS)
17/10/26 19:44:35 INFO ipc.Client: Retrying connect to server: slave3/192.168.84.203:8485. Already tried 8 time(s); retry policy is RetryUpToMaximumCountWithFixedSleep(maxRetries=10, sleepTime=1000 MILLISECONDS)
17/10/26 19:44:35 INFO ipc.Client: Retrying connect to server: slave1/192.168.84.201:8485. Already tried 8 time(s); retry policy is RetryUpToMaximumCountWithFixedSleep(maxRetries=10, sleepTime=1000 MILLISECONDS)
17/10/26 19:44:36 INFO ipc.Client: Retrying connect to server: slave2/192.168.84.202:8485. Already tried 9 time(s); retry policy is RetryUpToMaximumCountWithFixedSleep(maxRetries=10, sleepTime=1000 MILLISECONDS)
17/10/26 19:44:36 WARN namenode.NameNode: Encountered exception during format: 
org.apache.hadoop.hdfs.qjournal.client.QuorumException: Unable to check if JNs are ready for formatting. 1 exceptions thrown:
192.168.84.202:8485: Call From master/192.168.84.200 to slave2:8485 failed on connection exception: java.net.ConnectException: Connection refused; For more details see:  http://wiki.apache.org/hadoop/ConnectionRefused
    at org.apache.hadoop.hdfs.qjournal.client.QuorumException.create(QuorumException.java:81)
    at org.apache.hadoop.hdfs.qjournal.client.QuorumCall.rethrowException(QuorumCall.java:223)
    at org.apache.hadoop.hdfs.qjournal.client.QuorumJournalManager.hasSomeData(QuorumJournalManager.java:232)
    at org.apache.hadoop.hdfs.server.common.Storage.confirmFormat(Storage.java:901)
    at org.apache.hadoop.hdfs.server.namenode.FSImage.confirmFormat(FSImage.java:184)
    at org.apache.hadoop.hdfs.server.namenode.NameNode.format(NameNode.java:988)
    at org.apache.hadoop.hdfs.server.namenode.NameNode.createNameNode(NameNode.java:1434)
    at org.apache.hadoop.hdfs.server.namenode.NameNode.main(NameNode.java:1559)
17/10/26 19:44:36 INFO ipc.Client: Retrying connect to server: slave1/192.168.84.201:8485. Already tried 9 time(s); retry policy is RetryUpToMaximumCountWithFixedSleep(maxRetries=10, sleepTime=1000 MILLISECONDS)
17/10/26 19:44:36 INFO ipc.Client: Retrying connect to server: slave3/192.168.84.203:8485. Already tried 9 time(s); retry policy is RetryUpToMaximumCountWithFixedSleep(maxRetries=10, sleepTime=1000 MILLISECONDS)
17/10/26 19:44:36 ERROR namenode.NameNode: Failed to start namenode.
org.apache.hadoop.hdfs.qjournal.client.QuorumException: Unable to check if JNs are ready for formatting. 1 exceptions thrown:
192.168.84.202:8485: Call From master/192.168.84.200 to slave2:8485 failed on connection exception: java.net.ConnectException: Connection refused; For more details see:  http://wiki.apache.org/hadoop/ConnectionRefused
    at org.apache.hadoop.hdfs.qjournal.client.QuorumException.create(QuorumException.java:81)
    at org.apache.hadoop.hdfs.qjournal.client.QuorumCall.rethrowException(QuorumCall.java:223)
    at org.apache.hadoop.hdfs.qjournal.client.QuorumJournalManager.hasSomeData(QuorumJournalManager.java:232)
    at org.apache.hadoop.hdfs.server.common.Storage.confirmFormat(Storage.java:901)
    at org.apache.hadoop.hdfs.server.namenode.FSImage.confirmFormat(FSImage.java:184)
    at org.apache.hadoop.hdfs.server.namenode.NameNode.format(NameNode.java:988)
    at org.apache.hadoop.hdfs.server.namenode.NameNode.createNameNode(NameNode.java:1434)
    at org.apache.hadoop.hdfs.server.namenode.NameNode.main(NameNode.java:1559)
17/10/26 19:44:36 INFO util.ExitUtil: Exiting with status 1
17/10/26 19:44:36 INFO namenode.NameNode: SHUTDOWN_MSG: 
/************************************************************
SHUTDOWN_MSG: Shutting down NameNode at master/192.168.84.200
************************************************************/
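(The retries against port 8485 in the log above are attempts to reach JournalNodes, which only exist when quorum-journal HA is configured via dfs.namenode.shared.edits.dir; the plain non-HA configuration in this guide does not use them. When HA is configured, a common fix is to start the JournalNode daemon on every journal host before formatting:

./hadoop-daemon.sh start journalnode

run from the sbin directory on each journal host.)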


6) Enter /home/hadoop/hadoop-2.7.3/sbin and start the daemons:

./start-dfs.sh
./start-yarn.sh

These commands bring up HDFS and YARN, and the Hadoop cluster is now running. To shut it down, run the following from the same sbin directory:

./stop-yarn.sh
./stop-dfs.sh
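To see which daemons are actually running, the JDK's jps tool lists the Java processes on each node. With the configuration above you would expect roughly:

jps

On master: NameNode, SecondaryNameNode, and ResourceManager; on slave1 and slave2: DataNode and NodeManager.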

7) Checking the running cluster

After start-dfs.sh, the web UI at http://master:50070 shows the cluster summary and per-DataNode details.

After start-yarn.sh, the web UI at http://master:8088 shows the cluster's resource and application information.
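The same information is also available from the command line, which is handy when the web UIs are unreachable (both are standard Hadoop 2.7 tools, run from the bin directory):

./hdfs dfsadmin -report
./yarn node -list

The report should list two live DataNodes, and the node list two running NodeManagers.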
